2026-03-07 00:00:06.858987 | Job console starting
2026-03-07 00:00:06.884386 | Updating git repos
2026-03-07 00:00:07.186730 | Cloning repos into workspace
2026-03-07 00:00:07.403329 | Restoring repo states
2026-03-07 00:00:07.423189 | Merging changes
2026-03-07 00:00:07.423212 | Checking out repos
2026-03-07 00:00:07.785416 | Preparing playbooks
2026-03-07 00:00:08.753803 | Running Ansible setup
2026-03-07 00:00:15.817293 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-07 00:00:18.280270 |
2026-03-07 00:00:18.280447 | PLAY [Base pre]
2026-03-07 00:00:18.388774 |
2026-03-07 00:00:18.388955 | TASK [Setup log path fact]
2026-03-07 00:00:18.483439 | orchestrator | ok
2026-03-07 00:00:18.537134 |
2026-03-07 00:00:18.537317 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-07 00:00:18.664587 | orchestrator | ok
2026-03-07 00:00:18.709807 |
2026-03-07 00:00:18.709945 | TASK [emit-job-header : Print job information]
2026-03-07 00:00:18.796693 | # Job Information
2026-03-07 00:00:18.796855 | Ansible Version: 2.16.14
2026-03-07 00:00:18.796889 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-07 00:00:18.796922 | Pipeline: periodic-midnight
2026-03-07 00:00:18.796945 | Executor: 521e9411259a
2026-03-07 00:00:18.796965 | Triggered by: https://github.com/osism/testbed
2026-03-07 00:00:18.796987 | Event ID: 16d29647e22242fe8869806ad52757f6
2026-03-07 00:00:18.815740 |
2026-03-07 00:00:18.815863 | LOOP [emit-job-header : Print node information]
2026-03-07 00:00:19.118080 | orchestrator | ok:
2026-03-07 00:00:19.118927 | orchestrator | # Node Information
2026-03-07 00:00:19.118988 | orchestrator | Inventory Hostname: orchestrator
2026-03-07 00:00:19.119047 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-07 00:00:19.119073 | orchestrator | Username: zuul-testbed06
2026-03-07 00:00:19.119094 | orchestrator | Distro: Debian 12.13
2026-03-07 00:00:19.119118 | orchestrator | Provider: static-testbed
2026-03-07 00:00:19.119139 | orchestrator | Region:
2026-03-07 00:00:19.119160 | orchestrator | Label: testbed-orchestrator
2026-03-07 00:00:19.119179 | orchestrator | Product Name: OpenStack Nova
2026-03-07 00:00:19.119198 | orchestrator | Interface IP: 81.163.193.140
2026-03-07 00:00:19.142112 |
2026-03-07 00:00:19.142224 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-07 00:00:19.840146 | orchestrator -> localhost | changed
2026-03-07 00:00:19.846374 |
2026-03-07 00:00:19.846470 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-07 00:00:21.861981 | orchestrator -> localhost | changed
2026-03-07 00:00:21.876387 |
2026-03-07 00:00:21.876488 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-07 00:00:22.531631 | orchestrator -> localhost | ok
2026-03-07 00:00:22.537424 |
2026-03-07 00:00:22.537513 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-07 00:00:22.577149 | orchestrator | ok
2026-03-07 00:00:22.600935 | orchestrator | included: /var/lib/zuul/builds/c8bc494999dd46d891a476a01b0f8e08/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-07 00:00:22.618417 |
2026-03-07 00:00:22.618514 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-07 00:00:24.130586 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-07 00:00:24.130761 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/c8bc494999dd46d891a476a01b0f8e08/work/c8bc494999dd46d891a476a01b0f8e08_id_rsa
2026-03-07 00:00:24.130793 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/c8bc494999dd46d891a476a01b0f8e08/work/c8bc494999dd46d891a476a01b0f8e08_id_rsa.pub
2026-03-07 00:00:24.130813 | orchestrator -> localhost | The key fingerprint is:
2026-03-07 00:00:24.130852 | orchestrator -> localhost | SHA256:lKAY2JW+WMGw02fQ56B0OczYqQnrpEKsjLpHrhD0els zuul-build-sshkey
2026-03-07 00:00:24.130872 | orchestrator -> localhost | The key's randomart image is:
2026-03-07 00:00:24.130899 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-07 00:00:24.130917 | orchestrator -> localhost | | oo+oB.o |
2026-03-07 00:00:24.130937 | orchestrator -> localhost | |. o=*o@... |
2026-03-07 00:00:24.130953 | orchestrator -> localhost | |..+=o*o=o |
2026-03-07 00:00:24.130969 | orchestrator -> localhost | |.o+.*o .. |
2026-03-07 00:00:24.130984 | orchestrator -> localhost | |*+ + . S |
2026-03-07 00:00:24.131005 | orchestrator -> localhost | |=o= . |
2026-03-07 00:00:24.131043 | orchestrator -> localhost | |++ . E |
2026-03-07 00:00:24.131060 | orchestrator -> localhost | |o + o |
2026-03-07 00:00:24.131077 | orchestrator -> localhost | |o+ . |
2026-03-07 00:00:24.131094 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-07 00:00:24.131141 | orchestrator -> localhost | ok: Runtime: 0:00:00.733086
2026-03-07 00:00:24.138425 |
2026-03-07 00:00:24.138509 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-07 00:00:24.165819 | orchestrator | ok
2026-03-07 00:00:24.178259 | orchestrator | included: /var/lib/zuul/builds/c8bc494999dd46d891a476a01b0f8e08/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-07 00:00:24.200388 |
2026-03-07 00:00:24.200481 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-07 00:00:24.238474 | orchestrator | skipping: Conditional result was False
2026-03-07 00:00:24.245495 |
2026-03-07 00:00:24.245590 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-07 00:00:25.321902 | orchestrator | changed
2026-03-07 00:00:25.335536 |
2026-03-07 00:00:25.335629 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-07 00:00:25.657580 | orchestrator | ok
2026-03-07 00:00:25.663388 |
2026-03-07 00:00:25.663468 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-07 00:00:26.141307 | orchestrator | ok
2026-03-07 00:00:26.147042 |
2026-03-07 00:00:26.147128 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-07 00:00:26.746817 | orchestrator | ok
2026-03-07 00:00:26.761091 |
2026-03-07 00:00:26.761194 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-07 00:00:26.835911 | orchestrator | skipping: Conditional result was False
2026-03-07 00:00:26.843773 |
2026-03-07 00:00:26.843875 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-07 00:00:27.978253 | orchestrator -> localhost | changed
2026-03-07 00:00:27.989747 |
2026-03-07 00:00:27.989838 | TASK [add-build-sshkey : Add back temp key]
2026-03-07 00:00:28.829339 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/c8bc494999dd46d891a476a01b0f8e08/work/c8bc494999dd46d891a476a01b0f8e08_id_rsa (zuul-build-sshkey)
2026-03-07 00:00:28.829525 | orchestrator -> localhost | ok: Runtime: 0:00:00.008098
2026-03-07 00:00:28.835463 |
2026-03-07 00:00:28.835551 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-07 00:00:29.513179 | orchestrator | ok
2026-03-07 00:00:29.518268 |
2026-03-07 00:00:29.518351 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-07 00:00:29.595940 | orchestrator | skipping: Conditional result was False
2026-03-07 00:00:29.657501 |
2026-03-07 00:00:29.657591 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-07 00:00:30.110324 | orchestrator | ok
2026-03-07 00:00:30.127415 |
2026-03-07 00:00:30.127513 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-07 00:00:30.165301 | orchestrator | ok
2026-03-07 00:00:30.174575 |
2026-03-07 00:00:30.174667 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-07 00:00:30.654789 | orchestrator -> localhost | ok
2026-03-07 00:00:30.660732 |
2026-03-07 00:00:30.660820 | TASK [validate-host : Collect information about the host]
2026-03-07 00:00:32.506673 | orchestrator | ok
2026-03-07 00:00:32.533369 |
2026-03-07 00:00:32.533481 | TASK [validate-host : Sanitize hostname]
2026-03-07 00:00:32.687436 | orchestrator | ok
2026-03-07 00:00:32.692134 |
2026-03-07 00:00:32.692233 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-07 00:00:34.350290 | orchestrator -> localhost | changed
2026-03-07 00:00:34.355359 |
2026-03-07 00:00:34.355441 | TASK [validate-host : Collect information about zuul worker]
2026-03-07 00:00:35.069029 | orchestrator | ok
2026-03-07 00:00:35.073510 |
2026-03-07 00:00:35.073593 | TASK [validate-host : Write out all zuul information for each host]
2026-03-07 00:00:36.469963 | orchestrator -> localhost | changed
2026-03-07 00:00:36.478588 |
2026-03-07 00:00:36.478674 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-07 00:00:36.815442 | orchestrator | ok
2026-03-07 00:00:36.820208 |
2026-03-07 00:00:36.820289 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-07 00:01:48.195255 | orchestrator | changed:
2026-03-07 00:01:48.196764 | orchestrator | .d..t...... src/
2026-03-07 00:01:48.196858 | orchestrator | .d..t...... src/github.com/
2026-03-07 00:01:48.196887 | orchestrator | .d..t...... src/github.com/osism/
2026-03-07 00:01:48.196910 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-07 00:01:48.196932 | orchestrator | RedHat.yml
2026-03-07 00:01:48.218273 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-07 00:01:48.218295 | orchestrator | RedHat.yml
2026-03-07 00:01:48.218411 | orchestrator | = 1.53.0"...
2026-03-07 00:02:02.189492 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-07 00:02:02.207160 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-07 00:02:02.372861 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-07 00:02:03.205272 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-07 00:02:03.587345 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-07 00:02:04.310479 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-07 00:02:04.374330 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-07 00:02:04.888923 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-07 00:02:04.888978 | orchestrator |
2026-03-07 00:02:04.888985 | orchestrator | Providers are signed by their developers.
2026-03-07 00:02:04.889015 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-07 00:02:04.889021 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-07 00:02:04.889034 | orchestrator |
2026-03-07 00:02:04.889039 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-07 00:02:04.889043 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-07 00:02:04.889055 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-07 00:02:04.889060 | orchestrator | you run "tofu init" in the future.
2026-03-07 00:02:04.889315 | orchestrator |
2026-03-07 00:02:04.889325 | orchestrator | OpenTofu has been successfully initialized!
2026-03-07 00:02:04.889331 | orchestrator |
2026-03-07 00:02:04.889335 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-07 00:02:04.889339 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-07 00:02:04.889342 | orchestrator | should now work.
2026-03-07 00:02:04.889347 | orchestrator |
2026-03-07 00:02:04.889351 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-07 00:02:04.889354 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-07 00:02:04.889359 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-07 00:02:05.065150 | orchestrator | Created and switched to workspace "ci"!
2026-03-07 00:02:05.065205 | orchestrator |
2026-03-07 00:02:05.065213 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-07 00:02:05.065219 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-07 00:02:05.065223 | orchestrator | for this configuration.
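The OpenTofu bootstrap sequence recorded above can be reproduced by hand. A minimal sketch, assuming the `tofu` CLI is installed and the current directory holds the testbed's Terraform configuration (the workspace name `ci` matches the log; everything else is standard OpenTofu usage):

```shell
# Install the providers declared by the configuration and write
# .terraform.lock.hcl so future runs pick identical versions.
tofu init

# Create and switch to an isolated workspace; its state starts empty,
# so a subsequent plan proposes creating every resource from scratch.
tofu workspace new ci

# Preview the execution plan. In the output, "+" marks resources to
# create and "<=" marks data sources read during apply.
tofu plan
```

This mirrors what the job does before applying: init once, isolate state per CI run in a workspace, then plan against the empty state.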
2026-03-07 00:02:05.188008 | orchestrator | ci.auto.tfvars
2026-03-07 00:02:05.192732 | orchestrator | default_custom.tf
2026-03-07 00:02:06.164001 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-07 00:02:06.786766 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-07 00:02:07.217594 | orchestrator |
2026-03-07 00:02:07.217650 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-07 00:02:07.217658 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-07 00:02:07.217690 | orchestrator | + create
2026-03-07 00:02:07.217706 | orchestrator | <= read (data resources)
2026-03-07 00:02:07.217719 | orchestrator |
2026-03-07 00:02:07.217723 | orchestrator | OpenTofu will perform the following actions:
2026-03-07 00:02:07.217835 | orchestrator |
2026-03-07 00:02:07.217849 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-07 00:02:07.217854 | orchestrator | # (config refers to values not yet known)
2026-03-07 00:02:07.217858 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-07 00:02:07.217863 | orchestrator | + checksum = (known after apply)
2026-03-07 00:02:07.217867 | orchestrator | + created_at = (known after apply)
2026-03-07 00:02:07.217871 | orchestrator | + file = (known after apply)
2026-03-07 00:02:07.217875 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.217892 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:07.217897 | orchestrator | + min_disk_gb = (known after apply)
2026-03-07 00:02:07.217900 | orchestrator | + min_ram_mb = (known after apply)
2026-03-07 00:02:07.217904 | orchestrator | + most_recent = true
2026-03-07 00:02:07.217909 | orchestrator | + name = (known after apply)
2026-03-07 00:02:07.217913 | orchestrator | + protected = (known after apply)
2026-03-07 00:02:07.217916 | orchestrator | + region = (known after apply)
2026-03-07 00:02:07.217922 | orchestrator | + schema = (known after apply)
2026-03-07 00:02:07.217926 | orchestrator | + size_bytes = (known after apply)
2026-03-07 00:02:07.217930 | orchestrator | + tags = (known after apply)
2026-03-07 00:02:07.217934 | orchestrator | + updated_at = (known after apply)
2026-03-07 00:02:07.217938 | orchestrator | }
2026-03-07 00:02:07.218065 | orchestrator |
2026-03-07 00:02:07.218079 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-07 00:02:07.218084 | orchestrator | # (config refers to values not yet known)
2026-03-07 00:02:07.218088 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-07 00:02:07.218092 | orchestrator | + checksum = (known after apply)
2026-03-07 00:02:07.218096 | orchestrator | + created_at = (known after apply)
2026-03-07 00:02:07.218100 | orchestrator | + file = (known after apply)
2026-03-07 00:02:07.218103 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.218107 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:07.218111 | orchestrator | + min_disk_gb = (known after apply)
2026-03-07 00:02:07.218114 | orchestrator | + min_ram_mb = (known after apply)
2026-03-07 00:02:07.218118 | orchestrator | + most_recent = true
2026-03-07 00:02:07.218122 | orchestrator | + name = (known after apply)
2026-03-07 00:02:07.218126 | orchestrator | + protected = (known after apply)
2026-03-07 00:02:07.218130 | orchestrator | + region = (known after apply)
2026-03-07 00:02:07.218134 | orchestrator | + schema = (known after apply)
2026-03-07 00:02:07.218138 | orchestrator | + size_bytes = (known after apply)
2026-03-07 00:02:07.218141 | orchestrator | + tags = (known after apply)
2026-03-07 00:02:07.218145 | orchestrator | + updated_at = (known after apply)
2026-03-07 00:02:07.218149 | orchestrator | }
2026-03-07 00:02:07.218225 | orchestrator |
2026-03-07 00:02:07.218237 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-07 00:02:07.218242 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-07 00:02:07.218245 | orchestrator | + content = (known after apply)
2026-03-07 00:02:07.218250 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-07 00:02:07.218254 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-07 00:02:07.218258 | orchestrator | + content_md5 = (known after apply)
2026-03-07 00:02:07.218261 | orchestrator | + content_sha1 = (known after apply)
2026-03-07 00:02:07.218265 | orchestrator | + content_sha256 = (known after apply)
2026-03-07 00:02:07.218269 | orchestrator | + content_sha512 = (known after apply)
2026-03-07 00:02:07.218273 | orchestrator | + directory_permission = "0777"
2026-03-07 00:02:07.218277 | orchestrator | + file_permission = "0644"
2026-03-07 00:02:07.218281 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-07 00:02:07.218284 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.218288 | orchestrator | }
2026-03-07 00:02:07.218356 | orchestrator |
2026-03-07 00:02:07.218367 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-07 00:02:07.218372 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-07 00:02:07.218376 | orchestrator | + content = (known after apply)
2026-03-07 00:02:07.218379 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-07 00:02:07.218383 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-07 00:02:07.218387 | orchestrator | + content_md5 = (known after apply)
2026-03-07 00:02:07.218391 | orchestrator | + content_sha1 = (known after apply)
2026-03-07 00:02:07.218395 | orchestrator | + content_sha256 = (known after apply)
2026-03-07 00:02:07.218399 | orchestrator | + content_sha512 = (known after apply)
2026-03-07 00:02:07.218403 | orchestrator | + directory_permission = "0777"
2026-03-07 00:02:07.218407 | orchestrator | + file_permission = "0644"
2026-03-07 00:02:07.218416 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-07 00:02:07.218420 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.218423 | orchestrator | }
2026-03-07 00:02:07.218494 | orchestrator |
2026-03-07 00:02:07.218510 | orchestrator | # local_file.inventory will be created
2026-03-07 00:02:07.218515 | orchestrator | + resource "local_file" "inventory" {
2026-03-07 00:02:07.218519 | orchestrator | + content = (known after apply)
2026-03-07 00:02:07.218522 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-07 00:02:07.218526 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-07 00:02:07.218530 | orchestrator | + content_md5 = (known after apply)
2026-03-07 00:02:07.218534 | orchestrator | + content_sha1 = (known after apply)
2026-03-07 00:02:07.218538 | orchestrator | + content_sha256 = (known after apply)
2026-03-07 00:02:07.218542 | orchestrator | + content_sha512 = (known after apply)
2026-03-07 00:02:07.218546 | orchestrator | + directory_permission = "0777"
2026-03-07 00:02:07.218549 | orchestrator | + file_permission = "0644"
2026-03-07 00:02:07.218553 | orchestrator | + filename = "inventory.ci"
2026-03-07 00:02:07.218557 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.218561 | orchestrator | }
2026-03-07 00:02:07.218636 | orchestrator |
2026-03-07 00:02:07.218647 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-07 00:02:07.218652 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-07 00:02:07.218656 | orchestrator | + content = (sensitive value)
2026-03-07 00:02:07.218660 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-07 00:02:07.218663 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-07 00:02:07.218667 | orchestrator | + content_md5 = (known after apply)
2026-03-07 00:02:07.218671 | orchestrator | + content_sha1 = (known after apply)
2026-03-07 00:02:07.218675 | orchestrator | + content_sha256 = (known after apply)
2026-03-07 00:02:07.218678 | orchestrator | + content_sha512 = (known after apply)
2026-03-07 00:02:07.218682 | orchestrator | + directory_permission = "0700"
2026-03-07 00:02:07.218686 | orchestrator | + file_permission = "0600"
2026-03-07 00:02:07.218690 | orchestrator | + filename = ".id_rsa.ci"
2026-03-07 00:02:07.218694 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.218698 | orchestrator | }
2026-03-07 00:02:07.218719 | orchestrator |
2026-03-07 00:02:07.218730 | orchestrator | # null_resource.node_semaphore will be created
2026-03-07 00:02:07.218734 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-07 00:02:07.218738 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.218742 | orchestrator | }
2026-03-07 00:02:07.218806 | orchestrator |
2026-03-07 00:02:07.218817 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-07 00:02:07.218822 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-07 00:02:07.218825 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:07.218829 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:07.218833 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.218837 | orchestrator | + image_id = (known after apply)
2026-03-07 00:02:07.218841 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:07.218845 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-07 00:02:07.218848 | orchestrator | + region = (known after apply)
2026-03-07 00:02:07.218852 | orchestrator | + size = 80
2026-03-07 00:02:07.218856 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:07.218860 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:07.218864 | orchestrator | }
2026-03-07 00:02:07.218926 | orchestrator |
2026-03-07 00:02:07.218937 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-07 00:02:07.218941 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-07 00:02:07.218945 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:07.218949 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:07.218953 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.218961 | orchestrator | + image_id = (known after apply)
2026-03-07 00:02:07.218965 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:07.218969 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-07 00:02:07.218972 | orchestrator | + region = (known after apply)
2026-03-07 00:02:07.218976 | orchestrator | + size = 80
2026-03-07 00:02:07.218980 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:07.218984 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:07.218988 | orchestrator | }
2026-03-07 00:02:07.219064 | orchestrator |
2026-03-07 00:02:07.219075 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-07 00:02:07.219080 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-07 00:02:07.219084 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:07.219087 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:07.219091 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.219095 | orchestrator | + image_id = (known after apply)
2026-03-07 00:02:07.219099 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:07.219103 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-07 00:02:07.219106 | orchestrator | + region = (known after apply)
2026-03-07 00:02:07.219110 | orchestrator | + size = 80
2026-03-07 00:02:07.219114 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:07.219118 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:07.219122 | orchestrator | }
2026-03-07 00:02:07.219181 | orchestrator |
2026-03-07 00:02:07.219191 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-07 00:02:07.219196 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-07 00:02:07.219200 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:07.219204 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:07.219207 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.219211 | orchestrator | + image_id = (known after apply)
2026-03-07 00:02:07.219215 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:07.219219 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-07 00:02:07.219223 | orchestrator | + region = (known after apply)
2026-03-07 00:02:07.219227 | orchestrator | + size = 80
2026-03-07 00:02:07.219231 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:07.219234 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:07.219238 | orchestrator | }
2026-03-07 00:02:07.219300 | orchestrator |
2026-03-07 00:02:07.219310 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-07 00:02:07.219314 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-07 00:02:07.219318 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:07.219322 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:07.219326 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.219330 | orchestrator | + image_id = (known after apply)
2026-03-07 00:02:07.219333 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:07.219340 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-07 00:02:07.219344 | orchestrator | + region = (known after apply)
2026-03-07 00:02:07.219348 | orchestrator | + size = 80
2026-03-07 00:02:07.219352 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:07.219356 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:07.219359 | orchestrator | }
2026-03-07 00:02:07.219420 | orchestrator |
2026-03-07 00:02:07.219431 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-07 00:02:07.219435 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-07 00:02:07.219439 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:07.219443 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:07.219447 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.219454 | orchestrator | + image_id = (known after apply)
2026-03-07 00:02:07.219458 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:07.219462 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-07 00:02:07.219466 | orchestrator | + region = (known after apply)
2026-03-07 00:02:07.219470 | orchestrator | + size = 80
2026-03-07 00:02:07.219474 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:07.219478 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:07.219481 | orchestrator | }
2026-03-07 00:02:07.219543 | orchestrator |
2026-03-07 00:02:07.219554 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-07 00:02:07.219559 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-07 00:02:07.219562 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:07.219566 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:07.219570 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.219574 | orchestrator | + image_id = (known after apply)
2026-03-07 00:02:07.219578 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:07.219581 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-07 00:02:07.219585 | orchestrator | + region = (known after apply)
2026-03-07 00:02:07.219589 | orchestrator | + size = 80
2026-03-07 00:02:07.219593 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:07.219597 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:07.219601 | orchestrator | }
2026-03-07 00:02:07.219658 | orchestrator |
2026-03-07 00:02:07.219669 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-07 00:02:07.219674 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-07 00:02:07.219678 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:07.219682 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:07.219685 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.219689 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:07.219693 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-07 00:02:07.219697 | orchestrator | + region = (known after apply)
2026-03-07 00:02:07.219701 | orchestrator | + size = 20
2026-03-07 00:02:07.219705 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:07.219708 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:07.219712 | orchestrator | }
2026-03-07 00:02:07.219766 | orchestrator |
2026-03-07 00:02:07.219776 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-07 00:02:07.219780 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-07 00:02:07.219784 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:07.219788 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:07.219792 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.219796 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:07.219799 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-07 00:02:07.219803 | orchestrator | + region = (known after apply)
2026-03-07 00:02:07.219807 | orchestrator | + size = 20
2026-03-07 00:02:07.219811 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:07.219815 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:07.219818 | orchestrator | }
2026-03-07 00:02:07.219873 | orchestrator |
2026-03-07 00:02:07.219884 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-07 00:02:07.219888 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-07 00:02:07.219892 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:07.219896 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:07.219900 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.219904 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:07.219907 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-07 00:02:07.219911 | orchestrator | + region = (known after apply)
2026-03-07 00:02:07.219921 | orchestrator | + size = 20
2026-03-07 00:02:07.219925 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:07.219929 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:07.219933 | orchestrator | }
2026-03-07 00:02:07.219987 | orchestrator |
2026-03-07 00:02:07.220038 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-07 00:02:07.220043 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-07 00:02:07.220047 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:07.220050 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:07.220054 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.220058 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:07.220062 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-07 00:02:07.220066 | orchestrator | + region = (known after apply)
2026-03-07 00:02:07.220070 | orchestrator | + size = 20
2026-03-07 00:02:07.220073 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:07.220077 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:07.220081 | orchestrator | }
2026-03-07 00:02:07.220173 | orchestrator |
2026-03-07 00:02:07.220185 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-07 00:02:07.220189 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-07 00:02:07.220193 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:07.220197 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:07.220201 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.220204 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:07.220208 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-07 00:02:07.220212 | orchestrator | + region = (known after apply)
2026-03-07 00:02:07.220219 | orchestrator | + size = 20
2026-03-07 00:02:07.220223 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:07.220227 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:07.220231 | orchestrator | }
2026-03-07 00:02:07.220293 | orchestrator |
2026-03-07 00:02:07.220304 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-07 00:02:07.220308 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-07 00:02:07.220312 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:07.220316 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:07.220320 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.220324 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:07.220327 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-07 00:02:07.220331 | orchestrator | + region = (known after apply)
2026-03-07 00:02:07.220335 | orchestrator | + size = 20
2026-03-07 00:02:07.220339 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:07.220342 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:07.220346 | orchestrator | }
2026-03-07 00:02:07.220401 | orchestrator |
2026-03-07 00:02:07.220412 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-07 00:02:07.220416 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-07 00:02:07.220420 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:07.220424 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:07.220428 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.220432 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:07.220435 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-07 00:02:07.220439 | orchestrator | + region = (known after apply)
2026-03-07 00:02:07.220443 | orchestrator | + size = 20
2026-03-07 00:02:07.220447 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:07.220451 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:07.220454 | orchestrator | }
2026-03-07 00:02:07.220512 | orchestrator |
2026-03-07 00:02:07.220522 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-07 00:02:07.220526 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-07 00:02:07.220535 | orchestrator | + attachment = (known after apply)
2026-03-07 00:02:07.220539 | orchestrator | + availability_zone = "nova"
2026-03-07 00:02:07.220543 | orchestrator | + id = (known after apply)
2026-03-07 00:02:07.220547 | orchestrator | + metadata = (known after apply)
2026-03-07 00:02:07.220550 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-07 00:02:07.220554 | orchestrator | + region = (known after apply)
2026-03-07 00:02:07.220558 | orchestrator | + size = 20
2026-03-07 00:02:07.220562 | orchestrator | + volume_retype_policy = "never"
2026-03-07 00:02:07.220566 | orchestrator | + volume_type = "ssd"
2026-03-07 00:02:07.220570 | orchestrator | }
2026-03-07 00:02:07.220626 | orchestrator |
2026-03-07 00:02:07.220637 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-07 00:02:07.220641 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-07 00:02:07.220645 | orchestrator | + attachment = (known after apply) 2026-03-07 00:02:07.220649 | orchestrator | + availability_zone = "nova" 2026-03-07 00:02:07.220653 | orchestrator | + id = (known after apply) 2026-03-07 00:02:07.220656 | orchestrator | + metadata = (known after apply) 2026-03-07 00:02:07.220660 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-07 00:02:07.220664 | orchestrator | + region = (known after apply) 2026-03-07 00:02:07.220668 | orchestrator | + size = 20 2026-03-07 00:02:07.220672 | orchestrator | + volume_retype_policy = "never" 2026-03-07 00:02:07.220675 | orchestrator | + volume_type = "ssd" 2026-03-07 00:02:07.220679 | orchestrator | } 2026-03-07 00:02:07.220871 | orchestrator | 2026-03-07 00:02:07.220883 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-07 00:02:07.220887 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-07 00:02:07.220891 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-07 00:02:07.220895 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-07 00:02:07.220899 | orchestrator | + all_metadata = (known after apply) 2026-03-07 00:02:07.220903 | orchestrator | + all_tags = (known after apply) 2026-03-07 00:02:07.220906 | orchestrator | + availability_zone = "nova" 2026-03-07 00:02:07.220910 | orchestrator | + config_drive = true 2026-03-07 00:02:07.220914 | orchestrator | + created = (known after apply) 2026-03-07 00:02:07.220918 | orchestrator | + flavor_id = (known after apply) 2026-03-07 00:02:07.220922 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-07 00:02:07.220926 | orchestrator | + force_delete = false 2026-03-07 00:02:07.220929 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-07 00:02:07.220933 | 
orchestrator | + id = (known after apply) 2026-03-07 00:02:07.220937 | orchestrator | + image_id = (known after apply) 2026-03-07 00:02:07.220941 | orchestrator | + image_name = (known after apply) 2026-03-07 00:02:07.220945 | orchestrator | + key_pair = "testbed" 2026-03-07 00:02:07.220948 | orchestrator | + name = "testbed-manager" 2026-03-07 00:02:07.220952 | orchestrator | + power_state = "active" 2026-03-07 00:02:07.220956 | orchestrator | + region = (known after apply) 2026-03-07 00:02:07.220960 | orchestrator | + security_groups = (known after apply) 2026-03-07 00:02:07.220963 | orchestrator | + stop_before_destroy = false 2026-03-07 00:02:07.220967 | orchestrator | + updated = (known after apply) 2026-03-07 00:02:07.220971 | orchestrator | + user_data = (sensitive value) 2026-03-07 00:02:07.220975 | orchestrator | 2026-03-07 00:02:07.220979 | orchestrator | + block_device { 2026-03-07 00:02:07.220983 | orchestrator | + boot_index = 0 2026-03-07 00:02:07.220987 | orchestrator | + delete_on_termination = false 2026-03-07 00:02:07.221010 | orchestrator | + destination_type = "volume" 2026-03-07 00:02:07.221016 | orchestrator | + multiattach = false 2026-03-07 00:02:07.221020 | orchestrator | + source_type = "volume" 2026-03-07 00:02:07.221024 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:07.221032 | orchestrator | } 2026-03-07 00:02:07.221036 | orchestrator | 2026-03-07 00:02:07.221040 | orchestrator | + network { 2026-03-07 00:02:07.221044 | orchestrator | + access_network = false 2026-03-07 00:02:07.221048 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-07 00:02:07.221052 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-07 00:02:07.221056 | orchestrator | + mac = (known after apply) 2026-03-07 00:02:07.221059 | orchestrator | + name = (known after apply) 2026-03-07 00:02:07.221063 | orchestrator | + port = (known after apply) 2026-03-07 00:02:07.221067 | orchestrator | + uuid = (known after apply) 2026-03-07 
00:02:07.221071 | orchestrator | } 2026-03-07 00:02:07.221075 | orchestrator | } 2026-03-07 00:02:07.221262 | orchestrator | 2026-03-07 00:02:07.221274 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-07 00:02:07.221279 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-07 00:02:07.221283 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-07 00:02:07.221286 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-07 00:02:07.221290 | orchestrator | + all_metadata = (known after apply) 2026-03-07 00:02:07.221294 | orchestrator | + all_tags = (known after apply) 2026-03-07 00:02:07.221298 | orchestrator | + availability_zone = "nova" 2026-03-07 00:02:07.221301 | orchestrator | + config_drive = true 2026-03-07 00:02:07.221305 | orchestrator | + created = (known after apply) 2026-03-07 00:02:07.221309 | orchestrator | + flavor_id = (known after apply) 2026-03-07 00:02:07.221313 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-07 00:02:07.221317 | orchestrator | + force_delete = false 2026-03-07 00:02:07.221320 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-07 00:02:07.221324 | orchestrator | + id = (known after apply) 2026-03-07 00:02:07.221328 | orchestrator | + image_id = (known after apply) 2026-03-07 00:02:07.221332 | orchestrator | + image_name = (known after apply) 2026-03-07 00:02:07.221335 | orchestrator | + key_pair = "testbed" 2026-03-07 00:02:07.221339 | orchestrator | + name = "testbed-node-0" 2026-03-07 00:02:07.221343 | orchestrator | + power_state = "active" 2026-03-07 00:02:07.221347 | orchestrator | + region = (known after apply) 2026-03-07 00:02:07.221350 | orchestrator | + security_groups = (known after apply) 2026-03-07 00:02:07.221354 | orchestrator | + stop_before_destroy = false 2026-03-07 00:02:07.221358 | orchestrator | + updated = (known after apply) 2026-03-07 00:02:07.221361 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-07 00:02:07.221365 | orchestrator | 2026-03-07 00:02:07.221369 | orchestrator | + block_device { 2026-03-07 00:02:07.221373 | orchestrator | + boot_index = 0 2026-03-07 00:02:07.221377 | orchestrator | + delete_on_termination = false 2026-03-07 00:02:07.221380 | orchestrator | + destination_type = "volume" 2026-03-07 00:02:07.221384 | orchestrator | + multiattach = false 2026-03-07 00:02:07.221388 | orchestrator | + source_type = "volume" 2026-03-07 00:02:07.221391 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:07.221395 | orchestrator | } 2026-03-07 00:02:07.221399 | orchestrator | 2026-03-07 00:02:07.221403 | orchestrator | + network { 2026-03-07 00:02:07.221406 | orchestrator | + access_network = false 2026-03-07 00:02:07.221410 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-07 00:02:07.221414 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-07 00:02:07.221418 | orchestrator | + mac = (known after apply) 2026-03-07 00:02:07.221422 | orchestrator | + name = (known after apply) 2026-03-07 00:02:07.221425 | orchestrator | + port = (known after apply) 2026-03-07 00:02:07.221429 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:07.221433 | orchestrator | } 2026-03-07 00:02:07.221437 | orchestrator | } 2026-03-07 00:02:07.221616 | orchestrator | 2026-03-07 00:02:07.221628 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-07 00:02:07.221632 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-07 00:02:07.221636 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-07 00:02:07.221643 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-07 00:02:07.221647 | orchestrator | + all_metadata = (known after apply) 2026-03-07 00:02:07.221651 | orchestrator | + all_tags = (known after apply) 2026-03-07 00:02:07.221654 | orchestrator | + availability_zone = "nova" 2026-03-07 00:02:07.221658 
| orchestrator | + config_drive = true 2026-03-07 00:02:07.221662 | orchestrator | + created = (known after apply) 2026-03-07 00:02:07.221665 | orchestrator | + flavor_id = (known after apply) 2026-03-07 00:02:07.221669 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-07 00:02:07.221673 | orchestrator | + force_delete = false 2026-03-07 00:02:07.221677 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-07 00:02:07.221680 | orchestrator | + id = (known after apply) 2026-03-07 00:02:07.221684 | orchestrator | + image_id = (known after apply) 2026-03-07 00:02:07.221688 | orchestrator | + image_name = (known after apply) 2026-03-07 00:02:07.221691 | orchestrator | + key_pair = "testbed" 2026-03-07 00:02:07.221695 | orchestrator | + name = "testbed-node-1" 2026-03-07 00:02:07.221699 | orchestrator | + power_state = "active" 2026-03-07 00:02:07.221703 | orchestrator | + region = (known after apply) 2026-03-07 00:02:07.221707 | orchestrator | + security_groups = (known after apply) 2026-03-07 00:02:07.221710 | orchestrator | + stop_before_destroy = false 2026-03-07 00:02:07.221714 | orchestrator | + updated = (known after apply) 2026-03-07 00:02:07.221718 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-07 00:02:07.221722 | orchestrator | 2026-03-07 00:02:07.221726 | orchestrator | + block_device { 2026-03-07 00:02:07.221729 | orchestrator | + boot_index = 0 2026-03-07 00:02:07.221733 | orchestrator | + delete_on_termination = false 2026-03-07 00:02:07.221737 | orchestrator | + destination_type = "volume" 2026-03-07 00:02:07.221740 | orchestrator | + multiattach = false 2026-03-07 00:02:07.221744 | orchestrator | + source_type = "volume" 2026-03-07 00:02:07.221748 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:07.221752 | orchestrator | } 2026-03-07 00:02:07.221755 | orchestrator | 2026-03-07 00:02:07.221759 | orchestrator | + network { 2026-03-07 00:02:07.221763 | orchestrator | + access_network = 
false 2026-03-07 00:02:07.221766 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-07 00:02:07.221770 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-07 00:02:07.221774 | orchestrator | + mac = (known after apply) 2026-03-07 00:02:07.221778 | orchestrator | + name = (known after apply) 2026-03-07 00:02:07.221781 | orchestrator | + port = (known after apply) 2026-03-07 00:02:07.221785 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:07.221789 | orchestrator | } 2026-03-07 00:02:07.221793 | orchestrator | } 2026-03-07 00:02:07.221969 | orchestrator | 2026-03-07 00:02:07.221981 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-07 00:02:07.221986 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-07 00:02:07.222003 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-07 00:02:07.222007 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-07 00:02:07.222026 | orchestrator | + all_metadata = (known after apply) 2026-03-07 00:02:07.222032 | orchestrator | + all_tags = (known after apply) 2026-03-07 00:02:07.222054 | orchestrator | + availability_zone = "nova" 2026-03-07 00:02:07.222060 | orchestrator | + config_drive = true 2026-03-07 00:02:07.222066 | orchestrator | + created = (known after apply) 2026-03-07 00:02:07.222072 | orchestrator | + flavor_id = (known after apply) 2026-03-07 00:02:07.222079 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-07 00:02:07.222082 | orchestrator | + force_delete = false 2026-03-07 00:02:07.222086 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-07 00:02:07.222090 | orchestrator | + id = (known after apply) 2026-03-07 00:02:07.222094 | orchestrator | + image_id = (known after apply) 2026-03-07 00:02:07.222102 | orchestrator | + image_name = (known after apply) 2026-03-07 00:02:07.222105 | orchestrator | + key_pair = "testbed" 2026-03-07 00:02:07.222109 | orchestrator | + name = 
"testbed-node-2" 2026-03-07 00:02:07.222113 | orchestrator | + power_state = "active" 2026-03-07 00:02:07.222117 | orchestrator | + region = (known after apply) 2026-03-07 00:02:07.222120 | orchestrator | + security_groups = (known after apply) 2026-03-07 00:02:07.222124 | orchestrator | + stop_before_destroy = false 2026-03-07 00:02:07.222128 | orchestrator | + updated = (known after apply) 2026-03-07 00:02:07.222132 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-07 00:02:07.222135 | orchestrator | 2026-03-07 00:02:07.222139 | orchestrator | + block_device { 2026-03-07 00:02:07.222143 | orchestrator | + boot_index = 0 2026-03-07 00:02:07.222147 | orchestrator | + delete_on_termination = false 2026-03-07 00:02:07.222151 | orchestrator | + destination_type = "volume" 2026-03-07 00:02:07.222154 | orchestrator | + multiattach = false 2026-03-07 00:02:07.222158 | orchestrator | + source_type = "volume" 2026-03-07 00:02:07.222162 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:07.222167 | orchestrator | } 2026-03-07 00:02:07.222172 | orchestrator | 2026-03-07 00:02:07.222178 | orchestrator | + network { 2026-03-07 00:02:07.222184 | orchestrator | + access_network = false 2026-03-07 00:02:07.222190 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-07 00:02:07.222196 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-07 00:02:07.222202 | orchestrator | + mac = (known after apply) 2026-03-07 00:02:07.222209 | orchestrator | + name = (known after apply) 2026-03-07 00:02:07.222213 | orchestrator | + port = (known after apply) 2026-03-07 00:02:07.222217 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:07.222220 | orchestrator | } 2026-03-07 00:02:07.222224 | orchestrator | } 2026-03-07 00:02:07.222411 | orchestrator | 2026-03-07 00:02:07.222424 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-07 00:02:07.222428 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-07 00:02:07.222432 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-07 00:02:07.222435 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-07 00:02:07.222440 | orchestrator | + all_metadata = (known after apply) 2026-03-07 00:02:07.222443 | orchestrator | + all_tags = (known after apply) 2026-03-07 00:02:07.222447 | orchestrator | + availability_zone = "nova" 2026-03-07 00:02:07.222451 | orchestrator | + config_drive = true 2026-03-07 00:02:07.222454 | orchestrator | + created = (known after apply) 2026-03-07 00:02:07.222458 | orchestrator | + flavor_id = (known after apply) 2026-03-07 00:02:07.222462 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-07 00:02:07.222465 | orchestrator | + force_delete = false 2026-03-07 00:02:07.222469 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-07 00:02:07.222473 | orchestrator | + id = (known after apply) 2026-03-07 00:02:07.222477 | orchestrator | + image_id = (known after apply) 2026-03-07 00:02:07.222480 | orchestrator | + image_name = (known after apply) 2026-03-07 00:02:07.222484 | orchestrator | + key_pair = "testbed" 2026-03-07 00:02:07.222488 | orchestrator | + name = "testbed-node-3" 2026-03-07 00:02:07.222491 | orchestrator | + power_state = "active" 2026-03-07 00:02:07.222495 | orchestrator | + region = (known after apply) 2026-03-07 00:02:07.222499 | orchestrator | + security_groups = (known after apply) 2026-03-07 00:02:07.222502 | orchestrator | + stop_before_destroy = false 2026-03-07 00:02:07.222506 | orchestrator | + updated = (known after apply) 2026-03-07 00:02:07.222510 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-07 00:02:07.222514 | orchestrator | 2026-03-07 00:02:07.222517 | orchestrator | + block_device { 2026-03-07 00:02:07.222524 | orchestrator | + boot_index = 0 2026-03-07 00:02:07.222528 | orchestrator | + delete_on_termination = false 2026-03-07 
00:02:07.222532 | orchestrator | + destination_type = "volume" 2026-03-07 00:02:07.222540 | orchestrator | + multiattach = false 2026-03-07 00:02:07.222543 | orchestrator | + source_type = "volume" 2026-03-07 00:02:07.222547 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:07.222551 | orchestrator | } 2026-03-07 00:02:07.222555 | orchestrator | 2026-03-07 00:02:07.222558 | orchestrator | + network { 2026-03-07 00:02:07.222562 | orchestrator | + access_network = false 2026-03-07 00:02:07.222566 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-07 00:02:07.222569 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-07 00:02:07.222573 | orchestrator | + mac = (known after apply) 2026-03-07 00:02:07.222577 | orchestrator | + name = (known after apply) 2026-03-07 00:02:07.222581 | orchestrator | + port = (known after apply) 2026-03-07 00:02:07.222584 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:07.222588 | orchestrator | } 2026-03-07 00:02:07.222592 | orchestrator | } 2026-03-07 00:02:07.222767 | orchestrator | 2026-03-07 00:02:07.222778 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-07 00:02:07.222783 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-07 00:02:07.222787 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-07 00:02:07.222791 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-07 00:02:07.222794 | orchestrator | + all_metadata = (known after apply) 2026-03-07 00:02:07.222798 | orchestrator | + all_tags = (known after apply) 2026-03-07 00:02:07.222802 | orchestrator | + availability_zone = "nova" 2026-03-07 00:02:07.222806 | orchestrator | + config_drive = true 2026-03-07 00:02:07.222809 | orchestrator | + created = (known after apply) 2026-03-07 00:02:07.222813 | orchestrator | + flavor_id = (known after apply) 2026-03-07 00:02:07.222817 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-07 00:02:07.222821 | 
orchestrator | + force_delete = false 2026-03-07 00:02:07.222825 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-07 00:02:07.222828 | orchestrator | + id = (known after apply) 2026-03-07 00:02:07.222832 | orchestrator | + image_id = (known after apply) 2026-03-07 00:02:07.222836 | orchestrator | + image_name = (known after apply) 2026-03-07 00:02:07.222840 | orchestrator | + key_pair = "testbed" 2026-03-07 00:02:07.222843 | orchestrator | + name = "testbed-node-4" 2026-03-07 00:02:07.222847 | orchestrator | + power_state = "active" 2026-03-07 00:02:07.222851 | orchestrator | + region = (known after apply) 2026-03-07 00:02:07.222855 | orchestrator | + security_groups = (known after apply) 2026-03-07 00:02:07.222858 | orchestrator | + stop_before_destroy = false 2026-03-07 00:02:07.222862 | orchestrator | + updated = (known after apply) 2026-03-07 00:02:07.222866 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-07 00:02:07.222870 | orchestrator | 2026-03-07 00:02:07.222873 | orchestrator | + block_device { 2026-03-07 00:02:07.222877 | orchestrator | + boot_index = 0 2026-03-07 00:02:07.222881 | orchestrator | + delete_on_termination = false 2026-03-07 00:02:07.222885 | orchestrator | + destination_type = "volume" 2026-03-07 00:02:07.222888 | orchestrator | + multiattach = false 2026-03-07 00:02:07.222892 | orchestrator | + source_type = "volume" 2026-03-07 00:02:07.222896 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:07.222900 | orchestrator | } 2026-03-07 00:02:07.222903 | orchestrator | 2026-03-07 00:02:07.222907 | orchestrator | + network { 2026-03-07 00:02:07.222911 | orchestrator | + access_network = false 2026-03-07 00:02:07.222915 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-07 00:02:07.222918 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-07 00:02:07.222922 | orchestrator | + mac = (known after apply) 2026-03-07 00:02:07.222926 | orchestrator | + name = (known 
after apply) 2026-03-07 00:02:07.222930 | orchestrator | + port = (known after apply) 2026-03-07 00:02:07.222933 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:07.222937 | orchestrator | } 2026-03-07 00:02:07.222941 | orchestrator | } 2026-03-07 00:02:07.223174 | orchestrator | 2026-03-07 00:02:07.223190 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-07 00:02:07.223195 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-07 00:02:07.223199 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-07 00:02:07.223203 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-07 00:02:07.223207 | orchestrator | + all_metadata = (known after apply) 2026-03-07 00:02:07.223210 | orchestrator | + all_tags = (known after apply) 2026-03-07 00:02:07.223214 | orchestrator | + availability_zone = "nova" 2026-03-07 00:02:07.223218 | orchestrator | + config_drive = true 2026-03-07 00:02:07.223222 | orchestrator | + created = (known after apply) 2026-03-07 00:02:07.223226 | orchestrator | + flavor_id = (known after apply) 2026-03-07 00:02:07.223229 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-07 00:02:07.223233 | orchestrator | + force_delete = false 2026-03-07 00:02:07.223241 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-07 00:02:07.223245 | orchestrator | + id = (known after apply) 2026-03-07 00:02:07.223249 | orchestrator | + image_id = (known after apply) 2026-03-07 00:02:07.223253 | orchestrator | + image_name = (known after apply) 2026-03-07 00:02:07.223256 | orchestrator | + key_pair = "testbed" 2026-03-07 00:02:07.223262 | orchestrator | + name = "testbed-node-5" 2026-03-07 00:02:07.223269 | orchestrator | + power_state = "active" 2026-03-07 00:02:07.223275 | orchestrator | + region = (known after apply) 2026-03-07 00:02:07.223280 | orchestrator | + security_groups = (known after apply) 2026-03-07 00:02:07.223284 | orchestrator | + 
stop_before_destroy = false 2026-03-07 00:02:07.223288 | orchestrator | + updated = (known after apply) 2026-03-07 00:02:07.223292 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-07 00:02:07.223296 | orchestrator | 2026-03-07 00:02:07.223300 | orchestrator | + block_device { 2026-03-07 00:02:07.223303 | orchestrator | + boot_index = 0 2026-03-07 00:02:07.223307 | orchestrator | + delete_on_termination = false 2026-03-07 00:02:07.223311 | orchestrator | + destination_type = "volume" 2026-03-07 00:02:07.223315 | orchestrator | + multiattach = false 2026-03-07 00:02:07.223318 | orchestrator | + source_type = "volume" 2026-03-07 00:02:07.223322 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:07.223326 | orchestrator | } 2026-03-07 00:02:07.223330 | orchestrator | 2026-03-07 00:02:07.223333 | orchestrator | + network { 2026-03-07 00:02:07.223337 | orchestrator | + access_network = false 2026-03-07 00:02:07.223341 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-07 00:02:07.223345 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-07 00:02:07.223349 | orchestrator | + mac = (known after apply) 2026-03-07 00:02:07.223352 | orchestrator | + name = (known after apply) 2026-03-07 00:02:07.223356 | orchestrator | + port = (known after apply) 2026-03-07 00:02:07.223360 | orchestrator | + uuid = (known after apply) 2026-03-07 00:02:07.223364 | orchestrator | } 2026-03-07 00:02:07.223368 | orchestrator | } 2026-03-07 00:02:07.223413 | orchestrator | 2026-03-07 00:02:07.223424 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-07 00:02:07.223428 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-07 00:02:07.223433 | orchestrator | + fingerprint = (known after apply) 2026-03-07 00:02:07.223439 | orchestrator | + id = (known after apply) 2026-03-07 00:02:07.223446 | orchestrator | + name = "testbed" 2026-03-07 00:02:07.223453 | orchestrator | + private_key = 
(sensitive value) 2026-03-07 00:02:07.223457 | orchestrator | + public_key = (known after apply) 2026-03-07 00:02:07.223461 | orchestrator | + region = (known after apply) 2026-03-07 00:02:07.223465 | orchestrator | + user_id = (known after apply) 2026-03-07 00:02:07.223468 | orchestrator | } 2026-03-07 00:02:07.223507 | orchestrator | 2026-03-07 00:02:07.223518 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-07 00:02:07.223523 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-07 00:02:07.223531 | orchestrator | + device = (known after apply) 2026-03-07 00:02:07.223535 | orchestrator | + id = (known after apply) 2026-03-07 00:02:07.223539 | orchestrator | + instance_id = (known after apply) 2026-03-07 00:02:07.223543 | orchestrator | + region = (known after apply) 2026-03-07 00:02:07.223546 | orchestrator | + volume_id = (known after apply) 2026-03-07 00:02:07.223550 | orchestrator | } 2026-03-07 00:02:07.223585 | orchestrator | 2026-03-07 00:02:07.223596 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-07 00:02:07.223600 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-07 00:02:07.223604 | orchestrator | + device = (known after apply) 2026-03-07 00:02:07.223608 | orchestrator | + id = (known after apply) 2026-03-07 00:02:07.223612 | orchestrator | + instance_id = (known after apply) 2026-03-07 00:02:07.223616 | orchestrator | + region = (known after apply) 2026-03-07 00:02:07.223619 | orchestrator | + volume_id = (known after apply) 2026-03-07 00:02:07.223623 | orchestrator | } 2026-03-07 00:02:07.223660 | orchestrator | 2026-03-07 00:02:07.223671 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-07 00:02:07.223676 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
2026-03-07 00:02:07.223680 | orchestrator |       + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-07 00:02:07.228395 | orchestrator | + network_id = (known after apply) 2026-03-07 00:02:07.228399 | orchestrator | + no_gateway = false 2026-03-07 00:02:07.228403 | orchestrator | + region = (known after apply) 2026-03-07 00:02:07.228406 | orchestrator | + service_types = (known after apply) 2026-03-07 00:02:07.228414 | orchestrator | + tenant_id = (known after apply) 2026-03-07 00:02:07.228418 | orchestrator | 2026-03-07 00:02:07.228421 | orchestrator | + allocation_pool { 2026-03-07 00:02:07.228425 | orchestrator | + end = "192.168.31.250" 2026-03-07 00:02:07.228429 | orchestrator | + start = "192.168.31.200" 2026-03-07 00:02:07.228433 | orchestrator | } 2026-03-07 00:02:07.228437 | orchestrator | } 2026-03-07 00:02:07.228468 | orchestrator | 2026-03-07 00:02:07.228480 | orchestrator | # terraform_data.image will be created 2026-03-07 00:02:07.228484 | orchestrator | + resource "terraform_data" "image" { 2026-03-07 00:02:07.228488 | orchestrator | + id = (known after apply) 2026-03-07 00:02:07.228492 | orchestrator | + input = "Ubuntu 24.04" 2026-03-07 00:02:07.228495 | orchestrator | + output = (known after apply) 2026-03-07 00:02:07.228499 | orchestrator | } 2026-03-07 00:02:07.228530 | orchestrator | 2026-03-07 00:02:07.228541 | orchestrator | # terraform_data.image_node will be created 2026-03-07 00:02:07.228545 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-07 00:02:07.228549 | orchestrator | + id = (known after apply) 2026-03-07 00:02:07.228553 | orchestrator | + input = "Ubuntu 24.04" 2026-03-07 00:02:07.228556 | orchestrator | + output = (known after apply) 2026-03-07 00:02:07.228560 | orchestrator | } 2026-03-07 00:02:07.228575 | orchestrator | 2026-03-07 00:02:07.228580 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-03-07 00:02:07.228591 | orchestrator |
2026-03-07 00:02:07.228595 | orchestrator | Changes to Outputs:
2026-03-07 00:02:07.228605 | orchestrator | + manager_address = (sensitive value)
2026-03-07 00:02:07.228609 | orchestrator | + private_key = (sensitive value)
2026-03-07 00:02:07.488918 | orchestrator | terraform_data.image: Creating...
2026-03-07 00:02:07.489031 | orchestrator | terraform_data.image: Creation complete after 0s [id=b2fd360d-898b-f685-8433-eb4ad67bf189]
2026-03-07 00:02:07.495393 | orchestrator | terraform_data.image_node: Creating...
2026-03-07 00:02:07.496757 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=05fc09bc-0c94-8646-ba09-129586a4f45d]
2026-03-07 00:02:07.500357 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-07 00:02:07.508397 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-07 00:02:07.511323 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-07 00:02:07.511502 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-07 00:02:07.512698 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-07 00:02:07.512858 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-07 00:02:07.516573 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-07 00:02:07.517342 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-07 00:02:07.525934 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-07 00:02:07.529405 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-07 00:02:08.000400 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-03-07 00:02:08.006349 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-07 00:02:08.013843 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-07 00:02:08.018360 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-07 00:02:08.030545 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-07 00:02:08.047800 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-07 00:02:08.594043 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=7ffcf10f-e428-4cbc-9ffa-7b711f72dd1c]
2026-03-07 00:02:08.605885 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-07 00:02:08.622828 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=fdd80a45fbb7dd834f8cc20a9e17d60c84da5f9c]
2026-03-07 00:02:08.634354 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-07 00:02:08.647135 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=1d878b25192eb887d46ea5ef0a26d8c011e3cbbf]
2026-03-07 00:02:08.661288 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-07 00:02:11.120959 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=81cf8acf-ab0c-4c96-8ca2-b696b28e7835]
2026-03-07 00:02:11.128355 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=3c3014e6-40a5-4340-97e1-b63d744f1dc5]
2026-03-07 00:02:11.128402 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-07 00:02:11.134648 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-07 00:02:11.164567 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=beed34ce-a5a1-4e0c-b446-348e6964ce68]
2026-03-07 00:02:11.169506 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-07 00:02:11.176574 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=fa6ba2e8-2fa1-4496-a2d0-ef7dd6ca5952]
2026-03-07 00:02:11.185820 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=d799a894-5671-421e-939f-d4a49d05b62b]
2026-03-07 00:02:11.191248 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-07 00:02:11.195942 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-07 00:02:11.216368 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=af4ba259-cb6f-4fcf-8c2a-944dae969065]
2026-03-07 00:02:11.222818 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-07 00:02:11.247913 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=4fdacbb0-1c31-482c-97c6-063a331da0fc]
2026-03-07 00:02:11.261206 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-07 00:02:11.266720 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=99318194-5870-4346-ba42-ca8c5b557f89]
2026-03-07 00:02:11.294986 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=fc232e22-cf7b-4f47-aee0-37a45820ed30]
2026-03-07 00:02:12.026392 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=797fca32-d50c-40a1-babd-cf40b6b01cdf]
2026-03-07 00:02:12.108400 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=a92c02a8-185d-4059-8fc1-1084b2d87bc2]
2026-03-07 00:02:12.113241 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-07 00:02:14.516706 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=71c8fc84-aa22-48e4-a4b3-817a97778daa]
2026-03-07 00:02:14.548344 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=e807b00a-8b7b-48ce-9460-0e3636b06250]
2026-03-07 00:02:14.570330 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=d38c21a8-dda9-4fc2-b1ab-cbf9e01f4351]
2026-03-07 00:02:14.591637 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=ea170c0f-a027-4120-b295-61114d65555d]
2026-03-07 00:02:14.605823 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=32f6366e-c903-428d-822c-f2184feb77bd]
2026-03-07 00:02:14.624225 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=9e6318f9-11cb-4ed8-b0fb-e89153e65f2e]
2026-03-07 00:02:15.174806 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=5b933bbb-0ab9-4750-9a44-0850d46afab7]
2026-03-07 00:02:15.178895 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-07 00:02:15.179653 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-07 00:02:15.180732 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-07 00:02:15.406910 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=f7153e3b-8298-4799-84a5-9653385c479b]
2026-03-07 00:02:15.408216 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=69ef1e1d-eb76-4736-a457-5562e33a43f1]
2026-03-07 00:02:15.412091 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-07 00:02:15.412407 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-07 00:02:15.427702 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-07 00:02:15.427740 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-07 00:02:15.427746 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-07 00:02:15.427750 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-07 00:02:15.428651 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-07 00:02:15.428688 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-07 00:02:15.428694 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-07 00:02:15.578343 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=29a0e02f-ec10-4580-b367-62b8878b32bd]
2026-03-07 00:02:15.581462 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-07 00:02:15.645317 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=3f2032cd-5b55-4a76-a467-50a5f54b5047]
2026-03-07 00:02:15.653666 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-07 00:02:15.774360 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=3c91737e-4634-43fc-8e36-9d3858ad82c5]
2026-03-07 00:02:15.783181 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-07 00:02:15.800303 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=46769473-4f3b-4076-bdc3-6d5285fc6ecf]
2026-03-07 00:02:15.811617 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-07 00:02:16.040790 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=517496d6-cb82-416e-8049-f306b126a308]
2026-03-07 00:02:16.054149 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-07 00:02:16.078885 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=5e052913-c1f2-4160-8cbf-07a9e1ab8668]
2026-03-07 00:02:16.084236 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-07 00:02:16.129820 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=b0f7ba67-7424-480a-bef2-5330f2696355]
2026-03-07 00:02:16.141753 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-07 00:02:16.207106 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=c56dad63-4ac9-4dc6-843d-c538e487bfd1]
2026-03-07 00:02:16.522819 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=ae7ed1a7-b8d8-47ca-b6c4-07537877bce4]
2026-03-07 00:02:16.785775 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=1798e8d5-02ba-471e-928b-6aca9f99bf6b]
2026-03-07 00:02:16.862544 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 2s [id=a6f00b37-6082-4935-a896-7dd3772c0ab2]
2026-03-07 00:02:16.904133 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=24d35aaa-ba2e-4439-8e47-094e79bf64cc]
2026-03-07 00:02:16.943913 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=a8812964-635b-49f5-a01c-6f9ead8debe1]
2026-03-07 00:02:17.220056 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=0700992d-ab5b-4918-b7e4-bbfac5d9f2ce]
2026-03-07 00:02:17.225738 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=1edf6843-2c4d-42d6-8582-66bec0390ebc]
2026-03-07 00:02:17.330289 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=060a29b5-5fec-48c5-93f3-da874c5e75f6]
2026-03-07 00:02:18.241650 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=4969b7d8-9ce7-46cf-9a52-02fbaba2d7c6]
2026-03-07 00:02:18.285350 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-07 00:02:18.294744 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-07 00:02:18.333367 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-07 00:02:18.334631 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-07 00:02:18.356953 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-07 00:02:18.357048 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-07 00:02:18.363853 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-07 00:02:19.954831 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=9657f963-5739-4bfb-b428-3ce807b1b12c]
2026-03-07 00:02:19.966831 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-07 00:02:19.968927 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-07 00:02:19.970815 | orchestrator | local_file.inventory: Creating...
2026-03-07 00:02:19.974226 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=57db01c4c43b7ac6548e11ab5dc029e491594aed]
2026-03-07 00:02:19.980656 | orchestrator | local_file.inventory: Creation complete after 0s [id=cccce069542cc08c3d134ac5278e8b10c9f2b75f]
2026-03-07 00:02:20.949988 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=9657f963-5739-4bfb-b428-3ce807b1b12c]
2026-03-07 00:02:28.302322 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-07 00:02:28.335153 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-07 00:02:28.336173 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-07 00:02:28.353519 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-07 00:02:28.357665 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-07 00:02:28.364870 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-07 00:02:38.311100 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-07 00:02:38.336287 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-07 00:02:38.336388 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-07 00:02:38.354579 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-07 00:02:38.358725 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-07 00:02:38.364988 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-07 00:02:39.055674 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=5556ae5e-08cc-4878-ac8d-e26725401539]
2026-03-07 00:02:48.336697 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-07 00:02:48.336896 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-07 00:02:48.355230 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-07 00:02:48.359453 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-07 00:02:48.365884 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-03-07 00:02:49.169993 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=6d5e35c1-8f60-44e7-997f-1ccce4a793a2]
2026-03-07 00:02:49.182860 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=6a849208-271b-4b01-b5e1-f38330ab0252]
2026-03-07 00:02:49.240899 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=6cfc3e8b-d769-4448-bc03-750d65b46b85]
2026-03-07 00:02:49.254287 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=7cedcdd2-60f1-4e9b-a755-5712d1c10742]
2026-03-07 00:02:49.292001 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=4b9ee479-0b64-4b68-a538-320d759534e8]
2026-03-07 00:02:49.312776 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-07 00:02:49.317794 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=5607258178371588562]
2026-03-07 00:02:49.321653 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-07 00:02:49.321932 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-07 00:02:49.322762 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-07 00:02:49.330634 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-07 00:02:49.330695 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-07 00:02:49.331589 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-07 00:02:49.334245 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-07 00:02:49.337925 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-07 00:02:49.347097 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-07 00:02:49.355263 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-07 00:02:52.822617 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=6cfc3e8b-d769-4448-bc03-750d65b46b85/4fdacbb0-1c31-482c-97c6-063a331da0fc]
2026-03-07 00:02:52.864155 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=7cedcdd2-60f1-4e9b-a755-5712d1c10742/99318194-5870-4346-ba42-ca8c5b557f89]
2026-03-07 00:02:53.045644 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=5556ae5e-08cc-4878-ac8d-e26725401539/fa6ba2e8-2fa1-4496-a2d0-ef7dd6ca5952]
2026-03-07 00:02:58.902637 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=5556ae5e-08cc-4878-ac8d-e26725401539/81cf8acf-ab0c-4c96-8ca2-b696b28e7835]
2026-03-07 00:02:58.960971 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=6cfc3e8b-d769-4448-bc03-750d65b46b85/3c3014e6-40a5-4340-97e1-b63d744f1dc5]
2026-03-07 00:02:58.987709 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=7cedcdd2-60f1-4e9b-a755-5712d1c10742/beed34ce-a5a1-4e0c-b446-348e6964ce68]
2026-03-07 00:02:59.134757 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=7cedcdd2-60f1-4e9b-a755-5712d1c10742/d799a894-5671-421e-939f-d4a49d05b62b]
2026-03-07 00:02:59.281194 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=6cfc3e8b-d769-4448-bc03-750d65b46b85/af4ba259-cb6f-4fcf-8c2a-944dae969065]
2026-03-07 00:02:59.331120 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=5556ae5e-08cc-4878-ac8d-e26725401539/fc232e22-cf7b-4f47-aee0-37a45820ed30]
2026-03-07 00:02:59.349076 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-07 00:03:09.349669 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-07 00:03:19.350925 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [30s elapsed]
2026-03-07 00:03:19.921269 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 31s [id=5e8119fc-ba6a-4efb-ac8d-860fc4075b61]
2026-03-07 00:03:20.034137 | orchestrator |
2026-03-07 00:03:20.034214 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-07 00:03:20.034224 | orchestrator |
2026-03-07 00:03:20.034232 | orchestrator | Outputs:
2026-03-07 00:03:20.034239 | orchestrator |
2026-03-07 00:03:20.034246 | orchestrator | manager_address =
2026-03-07 00:03:20.034254 | orchestrator | private_key =
2026-03-07 00:03:20.234227 | orchestrator | ok: Runtime: 0:01:18.117425
2026-03-07 00:03:20.269727 |
2026-03-07 00:03:20.269859 | TASK [Fetch manager address]
2026-03-07 00:03:20.762048 | orchestrator | ok
2026-03-07 00:03:20.770450 |
2026-03-07 00:03:20.770563 | TASK [Set manager_host address]
2026-03-07 00:03:20.860786 | orchestrator | ok
2026-03-07 00:03:20.870455 |
2026-03-07 00:03:20.870594 | LOOP [Update ansible collections]
2026-03-07 00:03:22.042568 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-07 00:03:22.042907 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-07 00:03:22.042961 | orchestrator | Starting galaxy collection install process
2026-03-07 00:03:22.042986 | orchestrator | Process install dependency map
2026-03-07 00:03:22.043009 | orchestrator | Starting collection install process
2026-03-07 00:03:22.043030 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons'
2026-03-07 00:03:22.043057 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons
2026-03-07 00:03:22.043089 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-07 00:03:22.043144 | orchestrator | ok: Item: commons Runtime: 0:00:00.839354
2026-03-07 00:03:23.405820 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-07 00:03:23.405974 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-07 00:03:23.406008 | orchestrator | Starting galaxy collection install process
2026-03-07 00:03:23.406032 | orchestrator | Process install dependency map
2026-03-07 00:03:23.406053 | orchestrator | Starting collection install process
2026-03-07 00:03:23.406073 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services'
2026-03-07 00:03:23.406095 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services
2026-03-07 00:03:23.406116 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-07 00:03:23.406150 | orchestrator | ok: Item: services Runtime: 0:00:01.080314
2026-03-07 00:03:23.419651 |
2026-03-07 00:03:23.419793 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-07 00:03:34.049304 | orchestrator | ok
2026-03-07 00:03:34.061085 |
2026-03-07 00:03:34.061235 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-07 00:04:34.111838 | orchestrator | ok
2026-03-07 00:04:34.122465 |
2026-03-07 00:04:34.122599 | TASK [Fetch manager ssh hostkey]
2026-03-07 00:04:35.702214 | orchestrator | Output suppressed because no_log was given
2026-03-07 00:04:35.716478 |
2026-03-07 00:04:35.716639 | TASK [Get ssh keypair from terraform environment]
2026-03-07 00:04:36.254000 | orchestrator | ok: Runtime: 0:00:00.008362
2026-03-07 00:04:36.270571 |
2026-03-07 00:04:36.270718 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-07 00:04:36.319099 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-07 00:04:36.329269 |
2026-03-07 00:04:36.329388 | TASK [Run manager part 0]
2026-03-07 00:04:37.360587 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-07 00:04:37.410977 | orchestrator |
2026-03-07 00:04:37.411026 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-07 00:04:37.411034 | orchestrator |
2026-03-07 00:04:37.411049 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-07 00:04:39.592301 | orchestrator | ok: [testbed-manager]
2026-03-07 00:04:39.592341 | orchestrator |
2026-03-07 00:04:39.592362 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-07 00:04:39.592371 | orchestrator |
2026-03-07 00:04:39.592380 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-07 00:04:41.496761 | orchestrator | ok: [testbed-manager]
2026-03-07 00:04:41.496973 | orchestrator |
2026-03-07 00:04:41.496987 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-07 00:04:42.191972 | orchestrator | ok: [testbed-manager]
2026-03-07 00:04:42.192025 | orchestrator |
2026-03-07 00:04:42.192037 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-07 00:04:42.237935 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:04:42.237971 | orchestrator |
2026-03-07 00:04:42.237980 | orchestrator | TASK [Update package cache] ****************************************************
2026-03-07 00:04:42.274585 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:04:42.274631 | orchestrator |
2026-03-07 00:04:42.274642 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-07 00:04:42.305347 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:04:42.305382 | orchestrator |
2026-03-07 00:04:42.305388 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-07 00:04:42.344639 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:04:42.344699 | orchestrator |
2026-03-07 00:04:42.344710 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-07 00:04:42.389068 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:04:42.389136 | orchestrator |
2026-03-07 00:04:42.389144 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-07 00:04:42.427631 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:04:42.427695 | orchestrator |
2026-03-07 00:04:42.427716 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-07 00:04:42.467774 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:04:42.467820 | orchestrator |
2026-03-07 00:04:42.467830 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-07 00:04:44.402691 | orchestrator | changed: [testbed-manager]
2026-03-07 00:04:44.402775 | orchestrator |
2026-03-07 00:04:44.402791 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-03-07 00:07:51.356842 | orchestrator | changed: [testbed-manager]
2026-03-07 00:07:51.356915 | orchestrator |
2026-03-07 00:07:51.356927 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-07 00:09:21.683500 | orchestrator | changed: [testbed-manager]
2026-03-07 00:09:21.683674 | orchestrator |
2026-03-07 00:09:21.683692 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-07 00:09:43.364958 | orchestrator | changed: [testbed-manager]
2026-03-07 00:09:43.365006 | orchestrator |
2026-03-07 00:09:43.365190 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-07 00:09:51.841395 | orchestrator | changed: [testbed-manager]
2026-03-07 00:09:51.841439 | orchestrator |
2026-03-07 00:09:51.841446 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-03-07 00:09:51.891534 | orchestrator | ok: [testbed-manager]
2026-03-07 00:09:51.891631 | orchestrator |
2026-03-07 00:09:51.891649 | orchestrator | TASK [Get current user] ********************************************************
2026-03-07 00:09:52.701615 | orchestrator | ok: [testbed-manager]
2026-03-07 00:09:52.701710 | orchestrator |
2026-03-07 00:09:52.701728 | orchestrator | TASK [Create venv directory] ***************************************************
2026-03-07 00:09:53.455541 | orchestrator | changed: [testbed-manager]
2026-03-07 00:09:53.455596 | orchestrator |
2026-03-07 00:09:53.455604 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-03-07 00:09:59.675266 | orchestrator | changed: [testbed-manager]
2026-03-07 00:09:59.675350 | orchestrator |
2026-03-07 00:09:59.675407 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-03-07 00:10:05.728318 | orchestrator | changed: [testbed-manager]
2026-03-07 00:10:05.728404 |
orchestrator | 2026-03-07 00:10:05.728422 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-07 00:10:08.376027 | orchestrator | changed: [testbed-manager] 2026-03-07 00:10:08.376128 | orchestrator | 2026-03-07 00:10:08.376153 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-07 00:10:10.068704 | orchestrator | changed: [testbed-manager] 2026-03-07 00:10:10.068770 | orchestrator | 2026-03-07 00:10:10.068782 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-07 00:10:11.133618 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-07 00:10:11.133712 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-07 00:10:11.133726 | orchestrator | 2026-03-07 00:10:11.133737 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-07 00:10:11.170974 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-07 00:10:11.171061 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-07 00:10:11.171075 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-07 00:10:11.171088 | orchestrator | deprecation_warnings=False in ansible.cfg. 
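The "Wait up to 300 seconds for port 22 to become open and contain \"OpenSSH\"" tasks in this job use Ansible's `wait_for` module with a `search_regex`. The same retry-until-ready pattern can be sketched in plain shell — a generic helper with a caller-supplied probe command, not the job's actual implementation:

```shell
# Generic retry helper: run a probe command once per second until it
# succeeds or the timeout (in seconds) expires. Mirrors the semantics
# of Ansible's wait_for, not its implementation.
wait_for() {
    timeout=$1; shift
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        if "$@"; then
            return 0
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
    return 1
}

# Hypothetical usage against the manager's SSH banner (requires nc):
#   wait_for 300 sh -c 'nc -w2 testbed-manager 22 </dev/null | grep -q OpenSSH'
```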
2026-03-07 00:10:14.534957 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-07 00:10:14.535134 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-07 00:10:14.535171 | orchestrator | 2026-03-07 00:10:14.535192 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-07 00:10:15.092378 | orchestrator | changed: [testbed-manager] 2026-03-07 00:10:15.092487 | orchestrator | 2026-03-07 00:10:15.092504 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-07 00:13:41.478912 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-07 00:13:41.479065 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-07 00:13:41.479086 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-07 00:13:41.479099 | orchestrator | 2026-03-07 00:13:41.479112 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-07 00:13:43.827399 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-07 00:13:43.827504 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-07 00:13:43.827521 | orchestrator | 2026-03-07 00:13:43.827533 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-07 00:13:43.827544 | orchestrator | 2026-03-07 00:13:43.827555 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-07 00:13:45.268276 | orchestrator | ok: [testbed-manager] 2026-03-07 00:13:45.268360 | orchestrator | 2026-03-07 00:13:45.268379 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-07 00:13:45.319568 | orchestrator | ok: [testbed-manager] 2026-03-07 00:13:45.319609 | 
orchestrator | 2026-03-07 00:13:45.319616 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-07 00:13:45.393918 | orchestrator | ok: [testbed-manager] 2026-03-07 00:13:45.394060 | orchestrator | 2026-03-07 00:13:45.394078 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-07 00:13:46.225291 | orchestrator | changed: [testbed-manager] 2026-03-07 00:13:46.225358 | orchestrator | 2026-03-07 00:13:46.225366 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-07 00:13:46.972781 | orchestrator | changed: [testbed-manager] 2026-03-07 00:13:46.972873 | orchestrator | 2026-03-07 00:13:46.972888 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-07 00:13:48.337304 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-07 00:13:48.337384 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-07 00:13:48.337395 | orchestrator | 2026-03-07 00:13:48.337420 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-07 00:13:49.735805 | orchestrator | changed: [testbed-manager] 2026-03-07 00:13:49.736018 | orchestrator | 2026-03-07 00:13:49.736028 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-07 00:13:51.453155 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-07 00:13:51.453262 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-07 00:13:51.453277 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-07 00:13:51.453289 | orchestrator | 2026-03-07 00:13:51.453302 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-07 00:13:51.516454 | orchestrator | skipping: 
[testbed-manager] 2026-03-07 00:13:51.516516 | orchestrator | 2026-03-07 00:13:51.516524 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-07 00:13:51.593621 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:13:51.593707 | orchestrator | 2026-03-07 00:13:51.593725 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-07 00:13:52.175814 | orchestrator | changed: [testbed-manager] 2026-03-07 00:13:52.175904 | orchestrator | 2026-03-07 00:13:52.175954 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-07 00:13:52.249983 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:13:52.250059 | orchestrator | 2026-03-07 00:13:52.250067 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-07 00:13:53.141397 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-07 00:13:53.141514 | orchestrator | changed: [testbed-manager] 2026-03-07 00:13:53.141542 | orchestrator | 2026-03-07 00:13:53.141563 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-07 00:13:53.184390 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:13:53.184430 | orchestrator | 2026-03-07 00:13:53.184437 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-07 00:13:53.226770 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:13:53.226814 | orchestrator | 2026-03-07 00:13:53.226824 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-07 00:13:53.271265 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:13:53.271305 | orchestrator | 2026-03-07 00:13:53.271315 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-07 00:13:53.340044 | 
orchestrator | skipping: [testbed-manager] 2026-03-07 00:13:53.340090 | orchestrator | 2026-03-07 00:13:53.340099 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-07 00:13:54.057195 | orchestrator | ok: [testbed-manager] 2026-03-07 00:13:54.057295 | orchestrator | 2026-03-07 00:13:54.057312 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-07 00:13:54.057325 | orchestrator | 2026-03-07 00:13:54.057336 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-07 00:13:55.429328 | orchestrator | ok: [testbed-manager] 2026-03-07 00:13:55.429450 | orchestrator | 2026-03-07 00:13:55.429466 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-07 00:13:56.378607 | orchestrator | changed: [testbed-manager] 2026-03-07 00:13:56.378649 | orchestrator | 2026-03-07 00:13:56.378655 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:13:56.378662 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-07 00:13:56.378666 | orchestrator | 2026-03-07 00:13:56.725836 | orchestrator | ok: Runtime: 0:09:19.850723 2026-03-07 00:13:56.743677 | 2026-03-07 00:13:56.743821 | TASK [Point out that logging in on the manager is now possible] 2026-03-07 00:13:56.792652 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-03-07 00:13:56.824510 | 2026-03-07 00:13:56.824647 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-07 00:13:56.868839 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
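The PLAY RECAP line above compresses the run's outcome into `key=value` pairs per host (`ok=33 changed=23 failed=0 ...`). A small sketch extracting those counters from such a line, using the recap text from this run:

```shell
# Pull individual counters out of an Ansible PLAY RECAP host line.
# The recap string is copied from the log above.
recap='testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0'
failed=$(printf '%s\n' "$recap" | sed -n 's/.*failed=\([0-9]*\).*/\1/p')
changed=$(printf '%s\n' "$recap" | sed -n 's/.*changed=\([0-9]*\).*/\1/p')
# failed=0 means every task on the host succeeded (or was skipped/rescued)
[ "$failed" -eq 0 ] && echo "run succeeded ($changed tasks changed)"
# → run succeeded (23 tasks changed)
```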
2026-03-07 00:13:56.875895 | 2026-03-07 00:13:56.876012 | TASK [Run manager part 1 + 2] 2026-03-07 00:13:57.756554 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-07 00:13:57.815662 | orchestrator | 2026-03-07 00:13:57.815712 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-07 00:13:57.815719 | orchestrator | 2026-03-07 00:13:57.815733 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-07 00:14:00.670436 | orchestrator | ok: [testbed-manager] 2026-03-07 00:14:00.670485 | orchestrator | 2026-03-07 00:14:00.670507 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-07 00:14:00.711201 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:14:00.711254 | orchestrator | 2026-03-07 00:14:00.711262 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-07 00:14:00.765129 | orchestrator | ok: [testbed-manager] 2026-03-07 00:14:00.765288 | orchestrator | 2026-03-07 00:14:00.765300 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-07 00:14:00.825676 | orchestrator | ok: [testbed-manager] 2026-03-07 00:14:00.825766 | orchestrator | 2026-03-07 00:14:00.825784 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-07 00:14:00.922300 | orchestrator | ok: [testbed-manager] 2026-03-07 00:14:00.922390 | orchestrator | 2026-03-07 00:14:00.922409 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-07 00:14:00.997797 | orchestrator | ok: [testbed-manager] 2026-03-07 00:14:00.997844 | orchestrator | 2026-03-07 00:14:00.997853 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-07 00:14:01.043793 | 
orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-07 00:14:01.043842 | orchestrator | 2026-03-07 00:14:01.043852 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-07 00:14:01.783587 | orchestrator | ok: [testbed-manager] 2026-03-07 00:14:01.783673 | orchestrator | 2026-03-07 00:14:01.783692 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-07 00:14:01.837268 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:14:01.837325 | orchestrator | 2026-03-07 00:14:01.837334 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-07 00:14:03.230915 | orchestrator | changed: [testbed-manager] 2026-03-07 00:14:03.230963 | orchestrator | 2026-03-07 00:14:03.230990 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-07 00:14:03.785653 | orchestrator | ok: [testbed-manager] 2026-03-07 00:14:03.785717 | orchestrator | 2026-03-07 00:14:03.785725 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-07 00:14:04.787967 | orchestrator | changed: [testbed-manager] 2026-03-07 00:14:04.788018 | orchestrator | 2026-03-07 00:14:04.788026 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-07 00:14:18.866267 | orchestrator | changed: [testbed-manager] 2026-03-07 00:14:18.866313 | orchestrator | 2026-03-07 00:14:18.866319 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-07 00:14:19.529341 | orchestrator | ok: [testbed-manager] 2026-03-07 00:14:19.529406 | orchestrator | 2026-03-07 00:14:19.529417 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
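The repository role above removes the classic `/etc/apt/sources.list` and copies an `ubuntu.sources` file instead — on Ubuntu 24.04 the default APT source is a deb822-style file under `/etc/apt/sources.list.d/`. An illustrative stanza of that format (mirror URI, suites, and keyring path are assumptions, not the role's actual template; written to a temp path here rather than `/etc/apt/`):

```shell
# Sketch of a deb822 APT source like the ubuntu.sources file the role
# installs. All field values are illustrative, not taken from the role.
src=$(mktemp)
cat > "$src" <<'EOF'
Types: deb
URIs: http://archive.ubuntu.com/ubuntu/
Suites: noble noble-updates noble-security
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF
grep -c '^Types: deb' "$src"
```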
2026-03-07 00:14:19.583812 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:14:19.583887 | orchestrator | 2026-03-07 00:14:19.583895 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-07 00:14:20.526340 | orchestrator | changed: [testbed-manager] 2026-03-07 00:14:20.526487 | orchestrator | 2026-03-07 00:14:20.526500 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-07 00:14:21.459391 | orchestrator | changed: [testbed-manager] 2026-03-07 00:14:21.459451 | orchestrator | 2026-03-07 00:14:21.459458 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-07 00:14:21.954922 | orchestrator | changed: [testbed-manager] 2026-03-07 00:14:21.954989 | orchestrator | 2026-03-07 00:14:21.955001 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-07 00:14:21.998456 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-07 00:14:21.998528 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-07 00:14:21.998535 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-07 00:14:21.998541 | orchestrator | deprecation_warnings=False in ansible.cfg. 
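The "Copy SSH private key" task above puts the keypair (fetched earlier from the terraform environment) onto the manager; a private key must end up with owner-only permissions or ssh will refuse to use it. A minimal sketch of that copy-with-mode step, using throwaway files instead of the real keypair:

```shell
# Copy a private key into place with mode 0600 in one step.
# Paths and key material are stand-ins for the terraform keypair.
key_src=$(mktemp)
echo "dummy-key-material" > "$key_src"
key_dst=$(mktemp -d)/id_rsa
install -m 0600 "$key_src" "$key_dst"
stat -c '%a' "$key_dst"   # → 600
```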
2026-03-07 00:14:23.948178 | orchestrator | changed: [testbed-manager] 2026-03-07 00:14:23.948238 | orchestrator | 2026-03-07 00:14:23.948246 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-07 00:14:32.794182 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-07 00:14:32.794258 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-07 00:14:32.794269 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-07 00:14:32.794278 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-07 00:14:32.794292 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-07 00:14:32.794300 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-07 00:14:32.794307 | orchestrator | 2026-03-07 00:14:32.794316 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-07 00:14:33.868732 | orchestrator | changed: [testbed-manager] 2026-03-07 00:14:33.868841 | orchestrator | 2026-03-07 00:14:33.868857 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-07 00:14:33.904075 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:14:33.904114 | orchestrator | 2026-03-07 00:14:33.904120 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-07 00:14:36.964480 | orchestrator | changed: [testbed-manager] 2026-03-07 00:14:36.964519 | orchestrator | 2026-03-07 00:14:36.964526 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-07 00:14:37.005178 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:14:37.005220 | orchestrator | 2026-03-07 00:14:37.005229 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-07 00:16:11.015954 | orchestrator | changed: [testbed-manager] 2026-03-07 
00:16:11.015991 | orchestrator | 2026-03-07 00:16:11.015998 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-07 00:16:12.120749 | orchestrator | ok: [testbed-manager] 2026-03-07 00:16:12.120839 | orchestrator | 2026-03-07 00:16:12.120858 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:16:12.120874 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-07 00:16:12.120889 | orchestrator | 2026-03-07 00:16:12.529035 | orchestrator | ok: Runtime: 0:02:15.015668 2026-03-07 00:16:12.547685 | 2026-03-07 00:16:12.547859 | TASK [Reboot manager] 2026-03-07 00:16:14.090821 | orchestrator | ok: Runtime: 0:00:00.956662 2026-03-07 00:16:14.105695 | 2026-03-07 00:16:14.105837 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-07 00:16:28.143649 | orchestrator | ok 2026-03-07 00:16:28.154408 | 2026-03-07 00:16:28.154581 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-07 00:17:28.192040 | orchestrator | ok 2026-03-07 00:17:28.199286 | 2026-03-07 00:17:28.199410 | TASK [Deploy manager + bootstrap nodes] 2026-03-07 00:17:30.668075 | orchestrator | 2026-03-07 00:17:30.668273 | orchestrator | # DEPLOY MANAGER 2026-03-07 00:17:30.668331 | orchestrator | 2026-03-07 00:17:30.668346 | orchestrator | + set -e 2026-03-07 00:17:30.668360 | orchestrator | + echo 2026-03-07 00:17:30.668375 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-07 00:17:30.668393 | orchestrator | + echo 2026-03-07 00:17:30.668445 | orchestrator | + cat /opt/manager-vars.sh 2026-03-07 00:17:30.671405 | orchestrator | export NUMBER_OF_NODES=6 2026-03-07 00:17:30.671446 | orchestrator | 2026-03-07 00:17:30.671458 | orchestrator | export CEPH_VERSION=reef 2026-03-07 00:17:30.671472 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-07 00:17:30.671484 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-03-07 00:17:30.671508 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-07 00:17:30.671519 | orchestrator | 2026-03-07 00:17:30.671538 | orchestrator | export ARA=false 2026-03-07 00:17:30.671550 | orchestrator | export DEPLOY_MODE=manager 2026-03-07 00:17:30.671568 | orchestrator | export TEMPEST=true 2026-03-07 00:17:30.671580 | orchestrator | export IS_ZUUL=true 2026-03-07 00:17:30.671591 | orchestrator | 2026-03-07 00:17:30.671609 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18 2026-03-07 00:17:30.671621 | orchestrator | export EXTERNAL_API=false 2026-03-07 00:17:30.671633 | orchestrator | 2026-03-07 00:17:30.671643 | orchestrator | export IMAGE_USER=ubuntu 2026-03-07 00:17:30.671658 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-07 00:17:30.671669 | orchestrator | 2026-03-07 00:17:30.671680 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-07 00:17:30.671699 | orchestrator | 2026-03-07 00:17:30.671711 | orchestrator | + echo 2026-03-07 00:17:30.671724 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-07 00:17:30.672212 | orchestrator | ++ export INTERACTIVE=false 2026-03-07 00:17:30.672230 | orchestrator | ++ INTERACTIVE=false 2026-03-07 00:17:30.672242 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-07 00:17:30.672254 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-07 00:17:30.672430 | orchestrator | + source /opt/manager-vars.sh 2026-03-07 00:17:30.672454 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-07 00:17:30.672481 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-07 00:17:30.672505 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-07 00:17:30.672523 | orchestrator | ++ CEPH_VERSION=reef 2026-03-07 00:17:30.672541 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-07 00:17:30.672558 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-07 00:17:30.672576 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-07 00:17:30.672594 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-07 00:17:30.672611 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-07 00:17:30.672650 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-07 00:17:30.672668 | orchestrator | ++ export ARA=false 2026-03-07 00:17:30.672688 | orchestrator | ++ ARA=false 2026-03-07 00:17:30.672706 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-07 00:17:30.672725 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-07 00:17:30.672739 | orchestrator | ++ export TEMPEST=true 2026-03-07 00:17:30.672750 | orchestrator | ++ TEMPEST=true 2026-03-07 00:17:30.672761 | orchestrator | ++ export IS_ZUUL=true 2026-03-07 00:17:30.672772 | orchestrator | ++ IS_ZUUL=true 2026-03-07 00:17:30.672787 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18 2026-03-07 00:17:30.672799 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18 2026-03-07 00:17:30.672810 | orchestrator | ++ export EXTERNAL_API=false 2026-03-07 00:17:30.672821 | orchestrator | ++ EXTERNAL_API=false 2026-03-07 00:17:30.672832 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-07 00:17:30.672842 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-07 00:17:30.672853 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-07 00:17:30.672864 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-07 00:17:30.672875 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-07 00:17:30.672886 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-07 00:17:30.672897 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-07 00:17:30.723830 | orchestrator | + docker version 2026-03-07 00:17:30.852205 | orchestrator | Client: Docker Engine - Community 2026-03-07 00:17:30.852338 | orchestrator | Version: 27.5.1 2026-03-07 00:17:30.852356 | orchestrator | API version: 1.47 2026-03-07 00:17:30.852369 | orchestrator | Go version: go1.22.11 2026-03-07 00:17:30.852380 | orchestrator | Git commit: 9f9e405 2026-03-07 00:17:30.852392 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-07 00:17:30.852404 | orchestrator | OS/Arch: linux/amd64 2026-03-07 00:17:30.852415 | orchestrator | Context: default 2026-03-07 00:17:30.852426 | orchestrator | 2026-03-07 00:17:30.852438 | orchestrator | Server: Docker Engine - Community 2026-03-07 00:17:30.852449 | orchestrator | Engine: 2026-03-07 00:17:30.852464 | orchestrator | Version: 27.5.1 2026-03-07 00:17:30.852485 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-07 00:17:30.852541 | orchestrator | Go version: go1.22.11 2026-03-07 00:17:30.852560 | orchestrator | Git commit: 4c9b3b0 2026-03-07 00:17:30.852580 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-07 00:17:30.852595 | orchestrator | OS/Arch: linux/amd64 2026-03-07 00:17:30.852606 | orchestrator | Experimental: false 2026-03-07 00:17:30.852617 | orchestrator | containerd: 2026-03-07 00:17:30.852628 | orchestrator | Version: v2.2.1 2026-03-07 00:17:30.852640 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-03-07 00:17:30.852651 | orchestrator | runc: 2026-03-07 00:17:30.852662 | orchestrator | Version: 1.3.4 2026-03-07 00:17:30.852673 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-07 00:17:30.852684 | orchestrator | docker-init: 2026-03-07 00:17:30.852695 | orchestrator | Version: 0.19.0 2026-03-07 00:17:30.852707 | orchestrator | GitCommit: de40ad0 2026-03-07 00:17:30.855662 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-07 00:17:30.864973 | orchestrator | + set -e 2026-03-07 00:17:30.865098 | orchestrator | + source /opt/manager-vars.sh 2026-03-07 00:17:30.865129 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-07 00:17:30.865148 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-07 00:17:30.865166 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-07 00:17:30.865192 | orchestrator | ++ CEPH_VERSION=reef 2026-03-07 00:17:30.865209 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-07 
00:17:30.865227 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-07 00:17:30.865246 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-07 00:17:30.865263 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-07 00:17:30.865309 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-07 00:17:30.865330 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-07 00:17:30.865360 | orchestrator | ++ export ARA=false 2026-03-07 00:17:30.865379 | orchestrator | ++ ARA=false 2026-03-07 00:17:30.865397 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-07 00:17:30.865416 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-07 00:17:30.865433 | orchestrator | ++ export TEMPEST=true 2026-03-07 00:17:30.865450 | orchestrator | ++ TEMPEST=true 2026-03-07 00:17:30.865468 | orchestrator | ++ export IS_ZUUL=true 2026-03-07 00:17:30.865487 | orchestrator | ++ IS_ZUUL=true 2026-03-07 00:17:30.865507 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18 2026-03-07 00:17:30.865525 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18 2026-03-07 00:17:30.865544 | orchestrator | ++ export EXTERNAL_API=false 2026-03-07 00:17:30.865563 | orchestrator | ++ EXTERNAL_API=false 2026-03-07 00:17:30.865581 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-07 00:17:30.865596 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-07 00:17:30.865607 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-07 00:17:30.865625 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-07 00:17:30.865637 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-07 00:17:30.865648 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-07 00:17:30.865659 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-07 00:17:30.865670 | orchestrator | ++ export INTERACTIVE=false 2026-03-07 00:17:30.865680 | orchestrator | ++ INTERACTIVE=false 2026-03-07 00:17:30.865691 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-07 00:17:30.865706 | orchestrator | ++ OSISM_APPLY_RETRY=1 
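The traces above show each deploy script re-running `source /opt/manager-vars.sh`: the file is nothing but `export` statements, so sourcing it is how every script picks up the same settings. A minimal reproduction of the pattern with a temporary stand-in file:

```shell
# Reproduce the /opt/manager-vars.sh pattern: a plain sh file of
# exports that deploy scripts source. Temp file stands in for the
# real path; values copied from the log above.
vars=$(mktemp)
cat > "$vars" <<'EOF'
export NUMBER_OF_NODES=6
export CEPH_VERSION=reef
export MANAGER_VERSION=9.5.0
export OPENSTACK_VERSION=2024.2
EOF
. "$vars"
echo "deploying manager $MANAGER_VERSION with $NUMBER_OF_NODES nodes"
# → deploying manager 9.5.0 with 6 nodes
rm -f "$vars"
```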
2026-03-07 00:17:30.865724 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-07 00:17:30.865735 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-03-07 00:17:30.872868 | orchestrator | + set -e 2026-03-07 00:17:30.873408 | orchestrator | + VERSION=9.5.0 2026-03-07 00:17:30.873439 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-03-07 00:17:30.879047 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-07 00:17:30.879079 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-07 00:17:30.882075 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-07 00:17:30.885111 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-07 00:17:30.891517 | orchestrator | /opt/configuration ~ 2026-03-07 00:17:30.891581 | orchestrator | + set -e 2026-03-07 00:17:30.891596 | orchestrator | + pushd /opt/configuration 2026-03-07 00:17:30.891609 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-07 00:17:30.892978 | orchestrator | + source /opt/venv/bin/activate 2026-03-07 00:17:30.894074 | orchestrator | ++ deactivate nondestructive 2026-03-07 00:17:30.894139 | orchestrator | ++ '[' -n '' ']' 2026-03-07 00:17:30.894156 | orchestrator | ++ '[' -n '' ']' 2026-03-07 00:17:30.894201 | orchestrator | ++ hash -r 2026-03-07 00:17:30.894213 | orchestrator | ++ '[' -n '' ']' 2026-03-07 00:17:30.894224 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-07 00:17:30.894235 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-07 00:17:30.894246 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-07 00:17:30.894257 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-07 00:17:30.894268 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-07 00:17:30.894320 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-07 00:17:30.894340 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-07 00:17:30.894360 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-07 00:17:30.894372 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-07 00:17:30.894384 | orchestrator | ++ export PATH 2026-03-07 00:17:30.894395 | orchestrator | ++ '[' -n '' ']' 2026-03-07 00:17:30.894406 | orchestrator | ++ '[' -z '' ']' 2026-03-07 00:17:30.894430 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-07 00:17:30.894441 | orchestrator | ++ PS1='(venv) ' 2026-03-07 00:17:30.894452 | orchestrator | ++ export PS1 2026-03-07 00:17:30.894463 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-07 00:17:30.894474 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-07 00:17:30.894485 | orchestrator | ++ hash -r 2026-03-07 00:17:30.894496 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-07 00:17:31.927363 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-07 00:17:31.928060 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-03-07 00:17:31.929501 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-07 00:17:31.930840 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-07 00:17:31.931799 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-07 00:17:31.941804 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-07 00:17:31.943073 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-07 00:17:31.944093 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-07 00:17:31.945408 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-07 00:17:31.979814 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.5) 2026-03-07 00:17:31.980842 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-07 00:17:31.982671 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-07 00:17:31.984042 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-07 00:17:31.987774 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-07 00:17:32.202967 | orchestrator | ++ which gilt 2026-03-07 00:17:32.206769 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-07 00:17:32.206870 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-07 00:17:32.421804 | orchestrator | osism.cfg-generics: 2026-03-07 00:17:32.573657 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-07 00:17:32.573798 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-07 00:17:32.573984 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-07 00:17:32.574819 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-07 00:17:33.125096 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-07 00:17:33.134996 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-07 00:17:33.448453 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-07 00:17:33.499045 | orchestrator | ~ 2026-03-07 00:17:33.499172 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-07 00:17:33.499189 | orchestrator | + deactivate 2026-03-07 00:17:33.499202 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-07 00:17:33.499215 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-07 00:17:33.499226 | orchestrator | + export PATH 2026-03-07 00:17:33.499237 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-07 00:17:33.499249 | orchestrator | + '[' -n '' ']' 2026-03-07 00:17:33.499263 | orchestrator | + hash -r 2026-03-07 00:17:33.499292 | orchestrator | + '[' -n '' ']' 2026-03-07 00:17:33.499303 | orchestrator | + unset VIRTUAL_ENV 2026-03-07 00:17:33.499316 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-07 00:17:33.499335 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-07 00:17:33.499353 | orchestrator | + unset -f deactivate 2026-03-07 00:17:33.499380 | orchestrator | + popd 2026-03-07 00:17:33.500744 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-07 00:17:33.500788 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-07 00:17:33.501409 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-07 00:17:33.562368 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-07 00:17:33.562475 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-07 00:17:33.563404 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-07 00:17:33.628549 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-07 00:17:33.628669 | orchestrator | ++ semver 2024.2 2025.1 2026-03-07 00:17:33.689648 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-07 00:17:33.689780 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-03-07 00:17:33.781803 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-07 00:17:33.781888 | orchestrator | + source /opt/venv/bin/activate 2026-03-07 00:17:33.781905 | orchestrator | ++ deactivate nondestructive 2026-03-07 00:17:33.781913 | orchestrator | ++ '[' -n '' ']' 2026-03-07 00:17:33.782035 | orchestrator | ++ '[' -n '' ']' 2026-03-07 00:17:33.784152 | orchestrator | ++ hash -r 2026-03-07 00:17:33.784197 | orchestrator | ++ '[' -n '' ']' 2026-03-07 00:17:33.784212 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-07 00:17:33.784224 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-07 00:17:33.784236 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-07 00:17:33.784252 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-07 00:17:33.784306 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-07 00:17:33.784325 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-07 00:17:33.784335 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-07 00:17:33.784349 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-07 00:17:33.784383 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-07 00:17:33.784395 | orchestrator | ++ export PATH 2026-03-07 00:17:33.784406 | orchestrator | ++ '[' -n '' ']' 2026-03-07 00:17:33.784417 | orchestrator | ++ '[' -z '' ']' 2026-03-07 00:17:33.784429 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-07 00:17:33.784446 | orchestrator | ++ PS1='(venv) ' 2026-03-07 00:17:33.784455 | orchestrator | ++ export PS1 2026-03-07 00:17:33.784465 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-07 00:17:33.784475 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-07 00:17:33.784486 | orchestrator | ++ hash -r 2026-03-07 00:17:33.784497 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-07 00:17:34.762440 | orchestrator | 2026-03-07 00:17:34.762544 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-07 00:17:34.762558 | orchestrator | 2026-03-07 00:17:34.762569 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-07 00:17:35.329882 | orchestrator | ok: [testbed-manager] 2026-03-07 00:17:35.330063 | orchestrator | 2026-03-07 00:17:35.330098 | orchestrator | TASK [Copy fact files] ********************************************************* 
2026-03-07 00:17:36.331591 | orchestrator | changed: [testbed-manager] 2026-03-07 00:17:36.331700 | orchestrator | 2026-03-07 00:17:36.331717 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-07 00:17:36.331756 | orchestrator | 2026-03-07 00:17:36.331768 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-07 00:17:38.554667 | orchestrator | ok: [testbed-manager] 2026-03-07 00:17:38.554752 | orchestrator | 2026-03-07 00:17:38.554763 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-07 00:17:38.607018 | orchestrator | ok: [testbed-manager] 2026-03-07 00:17:38.607132 | orchestrator | 2026-03-07 00:17:38.607157 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-07 00:17:39.060924 | orchestrator | changed: [testbed-manager] 2026-03-07 00:17:39.061026 | orchestrator | 2026-03-07 00:17:39.061045 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-07 00:17:39.095649 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:17:39.095745 | orchestrator | 2026-03-07 00:17:39.095760 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-07 00:17:39.446925 | orchestrator | changed: [testbed-manager] 2026-03-07 00:17:39.447024 | orchestrator | 2026-03-07 00:17:39.447039 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-07 00:17:39.792741 | orchestrator | ok: [testbed-manager] 2026-03-07 00:17:39.792855 | orchestrator | 2026-03-07 00:17:39.792874 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-07 00:17:39.912010 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:17:39.912107 | orchestrator | 2026-03-07 00:17:39.912122 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-03-07 00:17:39.912134 | orchestrator | 2026-03-07 00:17:39.912146 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-07 00:17:41.661768 | orchestrator | ok: [testbed-manager] 2026-03-07 00:17:41.661878 | orchestrator | 2026-03-07 00:17:41.661893 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-07 00:17:41.754204 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-07 00:17:41.754339 | orchestrator | 2026-03-07 00:17:41.754355 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-07 00:17:41.813941 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-07 00:17:41.814084 | orchestrator | 2026-03-07 00:17:41.814100 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-07 00:17:42.903704 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-07 00:17:42.903799 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-03-07 00:17:42.903810 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-07 00:17:42.903820 | orchestrator | 2026-03-07 00:17:42.903831 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-07 00:17:44.722071 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-07 00:17:44.722188 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-07 00:17:44.722205 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-07 00:17:44.722219 | orchestrator | 2026-03-07 00:17:44.722231 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-03-07 00:17:45.352096 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-07 00:17:45.352201 | orchestrator | changed: [testbed-manager] 2026-03-07 00:17:45.352217 | orchestrator | 2026-03-07 00:17:45.352230 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-07 00:17:46.008670 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-07 00:17:46.008778 | orchestrator | changed: [testbed-manager] 2026-03-07 00:17:46.008796 | orchestrator | 2026-03-07 00:17:46.008808 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-07 00:17:46.071458 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:17:46.071567 | orchestrator | 2026-03-07 00:17:46.071593 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-07 00:17:46.423721 | orchestrator | ok: [testbed-manager] 2026-03-07 00:17:46.423823 | orchestrator | 2026-03-07 00:17:46.423839 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-07 00:17:46.500744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-07 00:17:46.500868 | orchestrator | 2026-03-07 00:17:46.500884 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-07 00:17:47.529884 | orchestrator | changed: [testbed-manager] 2026-03-07 00:17:47.529990 | orchestrator | 2026-03-07 00:17:47.530007 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-07 00:17:48.254717 | orchestrator | changed: [testbed-manager] 2026-03-07 00:17:48.254822 | orchestrator | 2026-03-07 00:17:48.254839 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-07 00:17:57.833487 | 
orchestrator | changed: [testbed-manager] 2026-03-07 00:17:57.833620 | orchestrator | 2026-03-07 00:17:57.833647 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-07 00:17:57.873863 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:17:57.873948 | orchestrator | 2026-03-07 00:17:57.873983 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-07 00:17:57.873994 | orchestrator | 2026-03-07 00:17:57.874004 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-07 00:17:59.504987 | orchestrator | ok: [testbed-manager] 2026-03-07 00:17:59.505106 | orchestrator | 2026-03-07 00:17:59.505124 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-07 00:17:59.600962 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-07 00:17:59.601061 | orchestrator | 2026-03-07 00:17:59.601076 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-07 00:17:59.652611 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-07 00:17:59.652704 | orchestrator | 2026-03-07 00:17:59.652718 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-07 00:18:01.550324 | orchestrator | ok: [testbed-manager] 2026-03-07 00:18:01.550449 | orchestrator | 2026-03-07 00:18:01.550468 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-07 00:18:01.599626 | orchestrator | ok: [testbed-manager] 2026-03-07 00:18:01.599731 | orchestrator | 2026-03-07 00:18:01.599748 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-07 00:18:01.705976 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-07 00:18:01.706134 | orchestrator | 2026-03-07 00:18:01.706150 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-07 00:18:04.235661 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-07 00:18:04.235771 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-07 00:18:04.235786 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-07 00:18:04.235799 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-07 00:18:04.235814 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-07 00:18:04.235834 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-07 00:18:04.235853 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-07 00:18:04.235871 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-07 00:18:04.235891 | orchestrator | 2026-03-07 00:18:04.235911 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-07 00:18:04.842254 | orchestrator | changed: [testbed-manager] 2026-03-07 00:18:04.842390 | orchestrator | 2026-03-07 00:18:04.842421 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-07 00:18:05.463568 | orchestrator | changed: [testbed-manager] 2026-03-07 00:18:05.463667 | orchestrator | 2026-03-07 00:18:05.463683 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-07 00:18:05.544600 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-07 00:18:05.544695 | orchestrator | 2026-03-07 00:18:05.544710 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-03-07 00:18:06.680707 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-07 00:18:06.680818 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-07 00:18:06.680833 | orchestrator | 2026-03-07 00:18:06.680846 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-07 00:18:07.293503 | orchestrator | changed: [testbed-manager] 2026-03-07 00:18:07.293632 | orchestrator | 2026-03-07 00:18:07.293651 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-07 00:18:07.349766 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:18:07.349864 | orchestrator | 2026-03-07 00:18:07.349879 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-07 00:18:07.420822 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-07 00:18:07.420915 | orchestrator | 2026-03-07 00:18:07.420929 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-07 00:18:08.019681 | orchestrator | changed: [testbed-manager] 2026-03-07 00:18:08.019789 | orchestrator | 2026-03-07 00:18:08.019807 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-07 00:18:08.087422 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-07 00:18:08.087516 | orchestrator | 2026-03-07 00:18:08.087531 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-07 00:18:09.408892 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-07 00:18:09.409032 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-03-07 00:18:09.409050 | orchestrator | changed: [testbed-manager] 2026-03-07 00:18:09.409064 | orchestrator | 2026-03-07 00:18:09.409077 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-07 00:18:10.025055 | orchestrator | changed: [testbed-manager] 2026-03-07 00:18:10.025150 | orchestrator | 2026-03-07 00:18:10.025161 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-07 00:18:10.078699 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:18:10.078771 | orchestrator | 2026-03-07 00:18:10.078778 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-07 00:18:10.170365 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-07 00:18:10.170462 | orchestrator | 2026-03-07 00:18:10.170476 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-07 00:18:10.661338 | orchestrator | changed: [testbed-manager] 2026-03-07 00:18:10.661444 | orchestrator | 2026-03-07 00:18:10.661463 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-07 00:18:11.047720 | orchestrator | changed: [testbed-manager] 2026-03-07 00:18:11.047822 | orchestrator | 2026-03-07 00:18:11.047839 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-07 00:18:12.226572 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-07 00:18:12.226672 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-07 00:18:12.226687 | orchestrator | 2026-03-07 00:18:12.226700 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-07 00:18:12.840836 | orchestrator | changed: [testbed-manager] 2026-03-07 
00:18:12.840937 | orchestrator | 2026-03-07 00:18:12.840954 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-07 00:18:13.176837 | orchestrator | ok: [testbed-manager] 2026-03-07 00:18:13.176968 | orchestrator | 2026-03-07 00:18:13.176996 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-07 00:18:13.512336 | orchestrator | changed: [testbed-manager] 2026-03-07 00:18:13.512453 | orchestrator | 2026-03-07 00:18:13.512480 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-07 00:18:13.559527 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:18:13.559618 | orchestrator | 2026-03-07 00:18:13.559632 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-07 00:18:13.626771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-07 00:18:13.626904 | orchestrator | 2026-03-07 00:18:13.626922 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-07 00:18:13.669754 | orchestrator | ok: [testbed-manager] 2026-03-07 00:18:13.669853 | orchestrator | 2026-03-07 00:18:13.669869 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-07 00:18:15.566629 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-07 00:18:15.566740 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-07 00:18:15.566756 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-07 00:18:15.566768 | orchestrator | 2026-03-07 00:18:15.566780 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-07 00:18:16.176392 | orchestrator | changed: [testbed-manager] 2026-03-07 
00:18:16.176495 | orchestrator | 2026-03-07 00:18:16.176512 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-07 00:18:16.807337 | orchestrator | changed: [testbed-manager] 2026-03-07 00:18:16.807447 | orchestrator | 2026-03-07 00:18:16.807464 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-07 00:18:17.465603 | orchestrator | changed: [testbed-manager] 2026-03-07 00:18:17.465706 | orchestrator | 2026-03-07 00:18:17.465725 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-07 00:18:17.539263 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-07 00:18:17.539359 | orchestrator | 2026-03-07 00:18:17.539377 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-07 00:18:17.589130 | orchestrator | ok: [testbed-manager] 2026-03-07 00:18:17.589252 | orchestrator | 2026-03-07 00:18:17.589267 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-07 00:18:18.219522 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-07 00:18:18.219618 | orchestrator | 2026-03-07 00:18:18.219633 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-07 00:18:18.296144 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-07 00:18:18.296273 | orchestrator | 2026-03-07 00:18:18.296289 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-07 00:18:18.926159 | orchestrator | changed: [testbed-manager] 2026-03-07 00:18:18.926296 | orchestrator | 2026-03-07 00:18:18.926312 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-03-07 00:18:19.459703 | orchestrator | ok: [testbed-manager] 2026-03-07 00:18:19.459808 | orchestrator | 2026-03-07 00:18:19.459825 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-07 00:18:19.508620 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:18:19.508709 | orchestrator | 2026-03-07 00:18:19.508722 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-07 00:18:19.567741 | orchestrator | ok: [testbed-manager] 2026-03-07 00:18:19.567835 | orchestrator | 2026-03-07 00:18:19.567849 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-07 00:18:20.273535 | orchestrator | changed: [testbed-manager] 2026-03-07 00:18:20.273667 | orchestrator | 2026-03-07 00:18:20.273688 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-07 00:19:23.249132 | orchestrator | changed: [testbed-manager] 2026-03-07 00:19:23.249274 | orchestrator | 2026-03-07 00:19:23.249304 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-07 00:19:24.191408 | orchestrator | ok: [testbed-manager] 2026-03-07 00:19:24.191519 | orchestrator | 2026-03-07 00:19:24.191537 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-07 00:19:24.248222 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:19:24.248302 | orchestrator | 2026-03-07 00:19:24.248311 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-07 00:19:27.482713 | orchestrator | changed: [testbed-manager] 2026-03-07 00:19:27.482871 | orchestrator | 2026-03-07 00:19:27.482901 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-03-07 00:19:27.527254 | orchestrator | ok: [testbed-manager] 2026-03-07 00:19:27.527328 | orchestrator | 2026-03-07 00:19:27.527336 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-07 00:19:27.527341 | orchestrator | 2026-03-07 00:19:27.527346 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-07 00:19:27.669108 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:19:27.669187 | orchestrator | 2026-03-07 00:19:27.669197 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-07 00:20:27.727724 | orchestrator | Pausing for 60 seconds 2026-03-07 00:20:27.727808 | orchestrator | changed: [testbed-manager] 2026-03-07 00:20:27.727818 | orchestrator | 2026-03-07 00:20:27.727825 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-07 00:20:30.231137 | orchestrator | changed: [testbed-manager] 2026-03-07 00:20:30.231248 | orchestrator | 2026-03-07 00:20:30.231266 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-07 00:21:11.639349 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-07 00:21:11.639446 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
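The "Wait for an healthy manager service" handler above polls the container health check, retrying up to 50 times ("FAILED - RETRYING ... retries left") until Docker reports the service healthy. The underlying pattern can be sketched as a plain polling loop over `docker inspect`; the function name, container name, retry count, and sleep interval here are illustrative assumptions, not the role's actual values:

```shell
# Sketch of a wait-for-healthy loop like the Ansible handler above (assumed
# shape, not the role's implementation): poll the Docker health status until
# it reports "healthy" or the retries are exhausted.
wait_healthy() {
  local container="$1" retries="${2:-50}" sleep_s="${3:-5}"
  local status
  while [ "$retries" -gt 0 ]; do
    status="$(docker inspect --format '{{.State.Health.Status}}' "$container" 2>/dev/null || echo unknown)"
    [ "$status" = "healthy" ] && return 0
    echo "FAILED - RETRYING: waiting for $container ($retries retries left)" >&2
    retries=$((retries - 1))
    sleep "$sleep_s"
  done
  return 1
}

# Example (hypothetical container name):
# wait_healthy manager-osism-1 50 5
```

In the Ansible role this is typically expressed declaratively with `until`/`retries`/`delay` on a command task, which produces exactly the "FAILED - RETRYING ... (N retries left)" lines seen in the log.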
2026-03-07 00:21:11.639456 | orchestrator | changed: [testbed-manager] 2026-03-07 00:21:11.639465 | orchestrator | 2026-03-07 00:21:11.639489 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-07 00:21:21.393050 | orchestrator | changed: [testbed-manager] 2026-03-07 00:21:21.393186 | orchestrator | 2026-03-07 00:21:21.393216 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-07 00:21:21.471839 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-07 00:21:21.471927 | orchestrator | 2026-03-07 00:21:21.471939 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-07 00:21:21.471949 | orchestrator | 2026-03-07 00:21:21.471958 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-07 00:21:21.518536 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:21:21.518656 | orchestrator | 2026-03-07 00:21:21.518678 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-07 00:21:21.586570 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-07 00:21:21.586689 | orchestrator | 2026-03-07 00:21:21.586708 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-07 00:21:22.323346 | orchestrator | changed: [testbed-manager] 2026-03-07 00:21:22.323450 | orchestrator | 2026-03-07 00:21:22.323466 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-07 00:21:25.418244 | orchestrator | ok: [testbed-manager] 2026-03-07 00:21:25.418338 | orchestrator | 2026-03-07 00:21:25.418352 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-03-07 00:21:25.491687 | orchestrator | ok: [testbed-manager] => { 2026-03-07 00:21:25.491787 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-07 00:21:25.491852 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-07 00:21:25.491864 | orchestrator | "Checking running containers against expected versions...", 2026-03-07 00:21:25.491876 | orchestrator | "", 2026-03-07 00:21:25.491886 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-07 00:21:25.491897 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-03-07 00:21:25.491909 | orchestrator | " Enabled: true", 2026-03-07 00:21:25.491919 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-03-07 00:21:25.491929 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:21:25.491946 | orchestrator | "", 2026-03-07 00:21:25.491962 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-07 00:21:25.491979 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-03-07 00:21:25.491996 | orchestrator | " Enabled: true", 2026-03-07 00:21:25.492043 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-03-07 00:21:25.492062 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:21:25.492078 | orchestrator | "", 2026-03-07 00:21:25.492094 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-07 00:21:25.492111 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-03-07 00:21:25.492127 | orchestrator | " Enabled: true", 2026-03-07 00:21:25.492144 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-03-07 00:21:25.492159 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:21:25.492176 | orchestrator | 
"", 2026-03-07 00:21:25.492192 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-07 00:21:25.492211 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-03-07 00:21:25.492227 | orchestrator | " Enabled: true", 2026-03-07 00:21:25.492245 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-03-07 00:21:25.492260 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:21:25.492272 | orchestrator | "", 2026-03-07 00:21:25.492283 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-07 00:21:25.492298 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-03-07 00:21:25.492309 | orchestrator | " Enabled: true", 2026-03-07 00:21:25.492320 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-03-07 00:21:25.492330 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:21:25.492342 | orchestrator | "", 2026-03-07 00:21:25.492353 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-07 00:21:25.492365 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-07 00:21:25.492377 | orchestrator | " Enabled: true", 2026-03-07 00:21:25.492389 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-07 00:21:25.492399 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:21:25.492410 | orchestrator | "", 2026-03-07 00:21:25.492422 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-07 00:21:25.492433 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-07 00:21:25.492444 | orchestrator | " Enabled: true", 2026-03-07 00:21:25.492456 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-07 00:21:25.492468 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:21:25.492479 | orchestrator | "", 2026-03-07 00:21:25.492491 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-03-07 00:21:25.492502 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-07 00:21:25.492513 | orchestrator | " Enabled: true", 2026-03-07 00:21:25.492525 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-07 00:21:25.492536 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:21:25.492547 | orchestrator | "", 2026-03-07 00:21:25.492558 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-07 00:21:25.492570 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-03-07 00:21:25.492581 | orchestrator | " Enabled: true", 2026-03-07 00:21:25.492592 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-03-07 00:21:25.492603 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:21:25.492614 | orchestrator | "", 2026-03-07 00:21:25.492626 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-07 00:21:25.492635 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-07 00:21:25.492645 | orchestrator | " Enabled: true", 2026-03-07 00:21:25.492655 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-07 00:21:25.492664 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:21:25.492674 | orchestrator | "", 2026-03-07 00:21:25.492683 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-07 00:21:25.492693 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-07 00:21:25.492702 | orchestrator | " Enabled: true", 2026-03-07 00:21:25.492721 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-07 00:21:25.492731 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:21:25.492741 | orchestrator | "", 2026-03-07 00:21:25.492750 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-07 00:21:25.492760 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-07 00:21:25.492769 | orchestrator | " Enabled: true", 2026-03-07 00:21:25.492779 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-07 00:21:25.492788 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:21:25.492817 | orchestrator | "", 2026-03-07 00:21:25.492827 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-07 00:21:25.492837 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-07 00:21:25.492847 | orchestrator | " Enabled: true", 2026-03-07 00:21:25.492856 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-07 00:21:25.492866 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:21:25.492875 | orchestrator | "", 2026-03-07 00:21:25.492885 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-07 00:21:25.492895 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-07 00:21:25.492904 | orchestrator | " Enabled: true", 2026-03-07 00:21:25.492914 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-07 00:21:25.492944 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:21:25.492954 | orchestrator | "", 2026-03-07 00:21:25.492964 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-07 00:21:25.492974 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-07 00:21:25.492983 | orchestrator | " Enabled: true", 2026-03-07 00:21:25.493004 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-07 00:21:25.493014 | orchestrator | " Status: ✅ MATCH", 2026-03-07 00:21:25.493024 | orchestrator | "", 2026-03-07 00:21:25.493033 | orchestrator | "=== Summary ===", 2026-03-07 00:21:25.493043 | orchestrator | "Errors (version mismatches): 0", 2026-03-07 00:21:25.493053 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-03-07 00:21:25.493063 | orchestrator | "", 2026-03-07 00:21:25.493072 | orchestrator | "✅ All running containers match expected versions!" 2026-03-07 00:21:25.493082 | orchestrator | ] 2026-03-07 00:21:25.493092 | orchestrator | } 2026-03-07 00:21:25.493102 | orchestrator | 2026-03-07 00:21:25.493112 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-07 00:21:25.553422 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:21:25.553547 | orchestrator | 2026-03-07 00:21:25.553566 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:21:25.553580 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-07 00:21:25.553592 | orchestrator | 2026-03-07 00:21:25.647209 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-07 00:21:25.647307 | orchestrator | + deactivate 2026-03-07 00:21:25.647323 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-07 00:21:25.647337 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-07 00:21:25.647348 | orchestrator | + export PATH 2026-03-07 00:21:25.647868 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-07 00:21:25.647894 | orchestrator | + '[' -n '' ']' 2026-03-07 00:21:25.647908 | orchestrator | + hash -r 2026-03-07 00:21:25.647921 | orchestrator | + '[' -n '' ']' 2026-03-07 00:21:25.647934 | orchestrator | + unset VIRTUAL_ENV 2026-03-07 00:21:25.647948 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-07 00:21:25.647961 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-07 00:21:25.647974 | orchestrator | + unset -f deactivate 2026-03-07 00:21:25.647988 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-07 00:21:25.653301 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-07 00:21:25.653336 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-07 00:21:25.653347 | orchestrator | + local max_attempts=60 2026-03-07 00:21:25.653359 | orchestrator | + local name=ceph-ansible 2026-03-07 00:21:25.653400 | orchestrator | + local attempt_num=1 2026-03-07 00:21:25.654399 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:21:25.692054 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:21:25.692123 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-07 00:21:25.692136 | orchestrator | + local max_attempts=60 2026-03-07 00:21:25.692147 | orchestrator | + local name=kolla-ansible 2026-03-07 00:21:25.692158 | orchestrator | + local attempt_num=1 2026-03-07 00:21:25.692910 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-07 00:21:25.725998 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:21:25.726145 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-07 00:21:25.726206 | orchestrator | + local max_attempts=60 2026-03-07 00:21:25.726228 | orchestrator | + local name=osism-ansible 2026-03-07 00:21:25.726244 | orchestrator | + local attempt_num=1 2026-03-07 00:21:25.726358 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-07 00:21:25.755129 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:21:25.755172 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-07 00:21:25.755185 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-07 00:21:26.392690 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-07 00:21:26.575070 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-07 00:21:26.575167 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-03-07 00:21:26.575181 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-03-07 00:21:26.575192 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-03-07 00:21:26.575205 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-03-07 00:21:26.575247 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-03-07 00:21:26.575259 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-03-07 00:21:26.575270 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 56 seconds (healthy) 2026-03-07 00:21:26.575281 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-03-07 00:21:26.575292 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-03-07 00:21:26.575303 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 
"/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2026-03-07 00:21:26.575314 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-03-07 00:21:26.575324 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-03-07 00:21:26.575358 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-03-07 00:21:26.575370 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-03-07 00:21:26.575381 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-03-07 00:21:26.580280 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-07 00:21:26.623882 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-07 00:21:26.623954 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-07 00:21:26.626734 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-07 00:21:38.939714 | orchestrator | 2026-03-07 00:21:38 | INFO  | Task bd5cff25-c147-4e62-9a3a-339de0acda78 (resolvconf) was prepared for execution. 2026-03-07 00:21:38.939889 | orchestrator | 2026-03-07 00:21:38 | INFO  | It takes a moment until task bd5cff25-c147-4e62-9a3a-339de0acda78 (resolvconf) has been started and output is visible here. 
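The `wait_for_container_healthy` calls traced above all follow the same retry pattern: poll `docker inspect` for the container's health status until it reports `healthy`, bounded by an attempt budget. A minimal sketch of such a helper, reconstructed from the trace (the function names, `max_attempts`/`attempt_num` variables, and the `docker inspect` format string appear in the log; the sleep interval and the failure branch are assumptions, since the trace only shows the immediately-healthy path):

```shell
#!/usr/bin/env bash
# Poll a container's health status until it becomes "healthy" or the
# attempt budget is exhausted. Reconstructed from the set -x trace above;
# the 1-second sleep and the error message are assumptions.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 1
    done
}
```

In the log this helper is invoked once per support container, e.g. `wait_for_container_healthy 60 ceph-ansible`, before the deployment script moves on to running `osism apply`.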
2026-03-07 00:21:52.353001 | orchestrator |
2026-03-07 00:21:52.353115 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-03-07 00:21:52.353133 | orchestrator |
2026-03-07 00:21:52.353145 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-07 00:21:52.353157 | orchestrator | Saturday 07 March 2026 00:21:42 +0000 (0:00:00.136) 0:00:00.136 ********
2026-03-07 00:21:52.353169 | orchestrator | ok: [testbed-manager]
2026-03-07 00:21:52.353180 | orchestrator |
2026-03-07 00:21:52.353192 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-07 00:21:52.353203 | orchestrator | Saturday 07 March 2026 00:21:46 +0000 (0:00:03.551) 0:00:03.688 ********
2026-03-07 00:21:52.353215 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:21:52.353227 | orchestrator |
2026-03-07 00:21:52.353238 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-07 00:21:52.353249 | orchestrator | Saturday 07 March 2026 00:21:46 +0000 (0:00:00.058) 0:00:03.747 ********
2026-03-07 00:21:52.353261 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-03-07 00:21:52.353273 | orchestrator |
2026-03-07 00:21:52.353284 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-07 00:21:52.353295 | orchestrator | Saturday 07 March 2026 00:21:46 +0000 (0:00:00.094) 0:00:03.841 ********
2026-03-07 00:21:52.353325 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-03-07 00:21:52.353337 | orchestrator |
2026-03-07 00:21:52.353349 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-07 00:21:52.353360 | orchestrator | Saturday 07 March 2026 00:21:46 +0000 (0:00:00.079) 0:00:03.921 ********
2026-03-07 00:21:52.353370 | orchestrator | ok: [testbed-manager]
2026-03-07 00:21:52.353381 | orchestrator |
2026-03-07 00:21:52.353392 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-07 00:21:52.353403 | orchestrator | Saturday 07 March 2026 00:21:47 +0000 (0:00:01.052) 0:00:04.973 ********
2026-03-07 00:21:52.353414 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:21:52.353425 | orchestrator |
2026-03-07 00:21:52.353436 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-07 00:21:52.353447 | orchestrator | Saturday 07 March 2026 00:21:47 +0000 (0:00:00.050) 0:00:05.023 ********
2026-03-07 00:21:52.353457 | orchestrator | ok: [testbed-manager]
2026-03-07 00:21:52.353493 | orchestrator |
2026-03-07 00:21:52.353505 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-07 00:21:52.353516 | orchestrator | Saturday 07 March 2026 00:21:48 +0000 (0:00:00.493) 0:00:05.517 ********
2026-03-07 00:21:52.353526 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:21:52.353539 | orchestrator |
2026-03-07 00:21:52.353552 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-07 00:21:52.353566 | orchestrator | Saturday 07 March 2026 00:21:48 +0000 (0:00:00.078) 0:00:05.595 ********
2026-03-07 00:21:52.353579 | orchestrator | changed: [testbed-manager]
2026-03-07 00:21:52.353592 | orchestrator |
2026-03-07 00:21:52.353605 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-07 00:21:52.353616 | orchestrator | Saturday 07 March 2026 00:21:48 +0000 (0:00:00.521) 0:00:06.117 ********
2026-03-07 00:21:52.353627 | orchestrator | changed: [testbed-manager]
2026-03-07 00:21:52.353638 | orchestrator |
2026-03-07 00:21:52.353648 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-07 00:21:52.353659 | orchestrator | Saturday 07 March 2026 00:21:49 +0000 (0:00:01.057) 0:00:07.174 ********
2026-03-07 00:21:52.353670 | orchestrator | ok: [testbed-manager]
2026-03-07 00:21:52.353681 | orchestrator |
2026-03-07 00:21:52.353692 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-07 00:21:52.353703 | orchestrator | Saturday 07 March 2026 00:21:50 +0000 (0:00:00.932) 0:00:08.106 ********
2026-03-07 00:21:52.353714 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-03-07 00:21:52.353725 | orchestrator |
2026-03-07 00:21:52.353736 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-07 00:21:52.353770 | orchestrator | Saturday 07 March 2026 00:21:50 +0000 (0:00:00.084) 0:00:08.191 ********
2026-03-07 00:21:52.353781 | orchestrator | changed: [testbed-manager]
2026-03-07 00:21:52.353792 | orchestrator |
2026-03-07 00:21:52.353803 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:21:52.353815 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-07 00:21:52.353826 | orchestrator |
2026-03-07 00:21:52.353837 | orchestrator |
2026-03-07 00:21:52.353847 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:21:52.353858 | orchestrator | Saturday 07 March 2026 00:21:52 +0000 (0:00:01.140) 0:00:09.332 ********
2026-03-07 00:21:52.353869 | orchestrator | ===============================================================================
2026-03-07 00:21:52.353880 | orchestrator | Gathering Facts --------------------------------------------------------- 3.55s
2026-03-07 00:21:52.353891 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.14s
2026-03-07 00:21:52.353901 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.06s
2026-03-07 00:21:52.353913 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.05s
2026-03-07 00:21:52.353923 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.93s
2026-03-07 00:21:52.353935 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s
2026-03-07 00:21:52.353963 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s
2026-03-07 00:21:52.353975 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2026-03-07 00:21:52.353986 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2026-03-07 00:21:52.353997 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2026-03-07 00:21:52.354008 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2026-03-07 00:21:52.354077 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2026-03-07 00:21:52.354099 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s
2026-03-07 00:21:52.620531 | orchestrator | + osism apply sshconfig
2026-03-07 00:22:04.592638 | orchestrator | 2026-03-07 00:22:04 | INFO  | Task 98f2ee4c-1830-432b-b8bc-7347a00c69b7 (sshconfig) was prepared for execution.
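The resolvconf play that just finished essentially points /etc/resolv.conf at systemd-resolved's stub resolver and restarts the service (the `changed` tasks above: the symlink, the configuration files, and the restart). A rough shell equivalent of the link step; the `root` parameter is purely illustrative so the sketch can be exercised in a scratch directory, while the real role operates on the host root via Ansible modules:

```shell
#!/usr/bin/env bash
# Rough shell equivalent of the role's "Link /run/systemd/resolve/stub-resolv.conf
# to /etc/resolv.conf" task. The optional root prefix is an assumption added
# for illustration; the role itself targets / directly.
link_stub_resolv_conf() {
    local root="${1:-}"
    # -s symlink, -f replace any existing file, -n do not follow an existing link
    ln -sfn /run/systemd/resolve/stub-resolv.conf "${root}/etc/resolv.conf"
}

# On a real host the role then ensures the resolver is running, roughly:
#   systemctl enable --now systemd-resolved
#   systemctl restart systemd-resolved
```

Linking to the stub resolver (rather than writing nameservers into /etc/resolv.conf directly) keeps DNS configuration under systemd-resolved's control, which is why the role first removes packages that would otherwise manage the file.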
2026-03-07 00:22:04.592794 | orchestrator | 2026-03-07 00:22:04 | INFO  | It takes a moment until task 98f2ee4c-1830-432b-b8bc-7347a00c69b7 (sshconfig) has been started and output is visible here.
2026-03-07 00:22:15.114858 | orchestrator |
2026-03-07 00:22:15.114982 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-03-07 00:22:15.114998 | orchestrator |
2026-03-07 00:22:15.115031 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-03-07 00:22:15.115044 | orchestrator | Saturday 07 March 2026 00:22:08 +0000 (0:00:00.115) 0:00:00.115 ********
2026-03-07 00:22:15.115056 | orchestrator | ok: [testbed-manager]
2026-03-07 00:22:15.115067 | orchestrator |
2026-03-07 00:22:15.115079 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-03-07 00:22:15.115090 | orchestrator | Saturday 07 March 2026 00:22:09 +0000 (0:00:00.451) 0:00:00.567 ********
2026-03-07 00:22:15.115101 | orchestrator | changed: [testbed-manager]
2026-03-07 00:22:15.115114 | orchestrator |
2026-03-07 00:22:15.115125 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-03-07 00:22:15.115136 | orchestrator | Saturday 07 March 2026 00:22:09 +0000 (0:00:00.440) 0:00:01.008 ********
2026-03-07 00:22:15.115147 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-03-07 00:22:15.115159 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-03-07 00:22:15.115170 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-03-07 00:22:15.115181 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-03-07 00:22:15.115192 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-03-07 00:22:15.115203 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-03-07 00:22:15.115214 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-03-07 00:22:15.115225 | orchestrator |
2026-03-07 00:22:15.115236 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-03-07 00:22:15.115247 | orchestrator | Saturday 07 March 2026 00:22:14 +0000 (0:00:04.876) 0:00:05.884 ********
2026-03-07 00:22:15.115258 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:22:15.115269 | orchestrator |
2026-03-07 00:22:15.115280 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-03-07 00:22:15.115291 | orchestrator | Saturday 07 March 2026 00:22:14 +0000 (0:00:00.059) 0:00:05.944 ********
2026-03-07 00:22:15.115302 | orchestrator | changed: [testbed-manager]
2026-03-07 00:22:15.115313 | orchestrator |
2026-03-07 00:22:15.115324 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:22:15.115339 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-07 00:22:15.115352 | orchestrator |
2026-03-07 00:22:15.115366 | orchestrator |
2026-03-07 00:22:15.115378 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:22:15.115391 | orchestrator | Saturday 07 March 2026 00:22:14 +0000 (0:00:00.506) 0:00:06.451 ********
2026-03-07 00:22:15.115404 | orchestrator | ===============================================================================
2026-03-07 00:22:15.115417 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 4.88s
2026-03-07 00:22:15.115430 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.51s
2026-03-07 00:22:15.115442 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.45s
2026-03-07 00:22:15.115453 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.44s
2026-03-07 00:22:15.115464 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s
2026-03-07 00:22:15.364355 | orchestrator | + osism apply known-hosts
2026-03-07 00:22:27.403805 | orchestrator | 2026-03-07 00:22:27 | INFO  | Task 72616809-8e7d-41f4-a0ed-39a686f61be5 (known-hosts) was prepared for execution.
2026-03-07 00:22:27.403926 | orchestrator | 2026-03-07 00:22:27 | INFO  | It takes a moment until task 72616809-8e7d-41f4-a0ed-39a686f61be5 (known-hosts) has been started and output is visible here.
2026-03-07 00:22:43.702439 | orchestrator |
2026-03-07 00:22:43.702551 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-03-07 00:22:43.702568 | orchestrator |
2026-03-07 00:22:43.702580 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-03-07 00:22:43.702592 | orchestrator | Saturday 07 March 2026 00:22:31 +0000 (0:00:00.157) 0:00:00.157 ********
2026-03-07 00:22:43.702604 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-03-07 00:22:43.702616 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-03-07 00:22:43.702627 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-03-07 00:22:43.702638 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-03-07 00:22:43.702649 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-07 00:22:43.702700 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-07 00:22:43.702711 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-07 00:22:43.702722 | orchestrator |
2026-03-07 00:22:43.702733 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-03-07 00:22:43.702745 | orchestrator | Saturday 07 March 2026 00:22:37 +0000 (0:00:05.716) 0:00:05.873 ********
2026-03-07 00:22:43.702758 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-03-07 00:22:43.702771 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-03-07 00:22:43.702782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-03-07 00:22:43.702793 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-03-07 00:22:43.702804 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-03-07 00:22:43.702825 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-03-07 00:22:43.702837 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-03-07 00:22:43.702848 | orchestrator |
2026-03-07 00:22:43.702859 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-07 00:22:43.702870 | orchestrator | Saturday 07 March 2026 00:22:37 +0000 (0:00:00.155) 0:00:06.028 ********
2026-03-07 00:22:43.702889 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHbnUnvA0U/2jmgkmEKrw7B69meA/HA7Vvn+Ac3glYp0Wfbg6lLnVLOjZP1UxZs5IwVfhPamN/NI3ehGrhaaLJlYRMtvqrEoGD/V0Rx5ZMgAmBbgHRi2xE3RnGFRI7dMgOcA0QIbPVsYRkVKfvEC3d4rwvVoO1WR7442e33Sz0tq4bQ+T3f6nt0INYMQQxrF4IhLcmRspG0oFP2mlGw8L7ZjPJ5hLYukz4VFAhyLFHhmIxD2Xkf2POdhIQ6UzX1QYe7Mk/zpuCZCKzk4HsbJXuCpxqxQExy/NI0aDIoWXzAw13hKukkrpQAhXOHsk02Xk3uvLuoVrx4nGVib+prjXhqditfLJwhYYPx0kaidOL++pGL0QuGPuPEuxPAbgRl0Tn75LRZ/u4L57aXmarDkrAESTXfP4OzvzixUVvLs4TAmEr8d5iSHc+fgA562sfQ7yOUcjbGq0el08Vwuf/97kBnJmKQCiR8UB6rnVAOz6vzE2VF1SlrpeHUjxMgeunrxU=)
2026-03-07 00:22:43.702926 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH1p8NXVT63AI5w0en3xgdzqJOcCnyYHj8AkWELWpIBW6In6A7231enBDvYZnfwonCmcAEDyX0FpC/nHE/JL9QM=)
2026-03-07 00:22:43.702940 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKF3ReQNvNG5PMr0WEDZEIzKC1H4NuMlyEwRaD/JaP/w)
2026-03-07 00:22:43.702952 | orchestrator |
2026-03-07 00:22:43.702964 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-07 00:22:43.702977 | orchestrator | Saturday 07 March 2026 00:22:38 +0000 (0:00:01.154) 0:00:07.183 ********
2026-03-07 00:22:43.703009 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIF5SNcWzdNlQNtAlFnadRhTrNLh01qButS+y0q3ZfSyQ5eKWi8pOGWqpCZxocb7/lUOb/HYp2fIwPNMhjHX2LpHEitm4rwIYjhoiN7SUCFtqeBpbdAoTffTejkCAarHVbsCdUI6jMVma8655UWRHXOuQNHnpq3inxS51NwoGLHviqVohWuU9SWpCGk9EAvTsxlEZlSW3GTSvWvRV12/VaR8UjU6M9V5mdd4v1kzKU2BRu8N3ITdjU2M0XiTj37Popri3w4YH5BStOCyeq1DAATtZpgXL/DYC+KgFoyunHZYxG8lMJ3MqnguWraSuyhWwuSkWrXFSn7lcRRCiBc7DfckBc5E7bEOR/H87Zj0MH3CYl175o29Lz/CZDvFIamd7e5XwCGXRr71CJDxHuXbiXYku76s9eufevqqz6vQ6ksPZvzGyh/1bDQSf7vWdM1r7GuTDpsJAByhFdkArfl7Dgb+UFJDsoBm8XvlpbGF7PvweIUmUWtarOqOrwCGCS5u8=)
2026-03-07 00:22:43.703023 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFPs7ixVVdf4bwcoP6VtQCbc4TPZhY04XVUr7FXRsg9P)
2026-03-07 00:22:43.703037 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFADWAZMH5fiPBwNF8L0xdmUsBe1vJZrz7V6GKexzvYZVrQoVdAbzc+HApWpfbXOqR1lGRaY2xSWEs6A3RLUMt4=)
2026-03-07 00:22:43.703050 | orchestrator |
2026-03-07 00:22:43.703063 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-07 00:22:43.703076 | orchestrator | Saturday 07 March 2026 00:22:39 +0000 (0:00:01.058) 0:00:08.242 ********
2026-03-07 00:22:43.703090 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9uxnEfhDpPFNJncQLsVjKPwe/WAkRai8D8mp81S9IM31/UYWweHqBGvVHtPbODA4s1jWGc0yTdazN563b63P7Xm7ikKBklPU1jmOeJNbkWDan4hJVmpF+kF3Pn0RQhfmMk5Gawk8NqXmOOSD22X4t1qsFcigS7y5xtydwePYAdWTGPFqw1ywEKfTnH6ntsrr6iAqR+KlJN26KJPW1/mlfiQ+qGnk4r42DrOhwgylLl5aZhljomAO+rGSPbKcAKS7tFQsnyhsIwvaOgmKfmPtfST6Q5unowEaohiQmn3lPekwPuqA8Xr1k20/7NS7ARpC8/dOLT9hbSGYsIWH7om/OEqAaK8Op4xN6X7WvTVMNho8yDkWGJF5pMLi75IsaRzPVPTFmkYu915vNjNS/2yLioU/95z4HneAKvAG6BJAFWLxNKYn72ILj5XipOXUmPfFDduWlxH6g54E9mQNcYSM8dVOjonBzTLBC5Jyszil6mydsU6po0KNTVx+NYXLALw8=)
2026-03-07 00:22:43.703104 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCDwlwCxbzdjJ6Kdcz7CvcT8hR4v970suURm4EZUf99GDPM0pxtZorq0mrQa76EOW7erwDQa2q+/SwFHkbxmwpI=)
2026-03-07 00:22:43.703117 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILGH+eMJS4GI8Uk9hxNTR4vhckhZ48qSXxBz/zdu/anA)
2026-03-07 00:22:43.703131 | orchestrator |
2026-03-07 00:22:43.703143 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-07 00:22:43.703156 | orchestrator | Saturday 07 March 2026 00:22:40 +0000 (0:00:01.030) 0:00:09.272 ********
2026-03-07 00:22:43.703170 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrNYGgTV08JTyNMK1mAQAJ0R1UzZ6QwxDOr97CXxaZc6yOub9TEXEliV3m7wri34ogQzmfsp3937rdxbpE8VTtz8xRlAZPqykHUhoIOKURc1B7JlItFUSXN5uwGnBKN1mhTCFdyUoPdr7N3cjqem9akgLT6EDAm3avzuNNwKVKaXyoWPEg8YqBUzuCK8YobT/Ht1aETbsgAGtdXgPSacL7qR5qVZhhSMHBIRFBPXHFBW+5fSWw/nNJ7ns8tZWnNLPYm3RVG7tvioDlNmB1tCBnCBcNgS8RP7VpUrS8nAUHSf/PhyHV4AH7FNFYsILsaFoxnmyg+yHeaSNaPAtJeyqjWgTQSEWzeWj066YL3v0jyiGXLZmlVlxhp1eOoiKY8OzQ3raicHwQfV1TBox2xRJMpWZokrewBpgFyuX1k+mmkv/EvtSOVT1oSf1PVn3ZV3Z3UaezUJoybebjWM1E9yguWcpMgCp3hSITBxf+oliSfTUPOlRwP4bDNHu5ObviWlk=)
2026-03-07 00:22:43.703183 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLdEKVKEdeU2XqvWzUccZvTG0O+NiQzPHlI5WNn2oid//8X3U7ROQw6p1xycuRxo6vcwTzpoOhU4avLanOw679E=)
2026-03-07 00:22:43.703204 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIFJPuKvnG0wxIIMJTJuhEwWMte7CYKsBVooNpIrCdtM)
2026-03-07 00:22:43.703217 | orchestrator |
2026-03-07 00:22:43.703230 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-07 00:22:43.703243 | orchestrator | Saturday 07 March 2026 00:22:41 +0000 (0:00:01.038) 0:00:10.311 ********
2026-03-07 00:22:43.703357 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGHkm1WyRofbr+QrHtCalwfxgV/402VMIj3/diviiM1f)
2026-03-07 00:22:43.703370 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQC2wUb7TGMub4/3h0Nb4ufsPZTK/nr8eZXz16bo996sA188+OcwZCWIRxV88qu07bJoljqvh0aMvk1jW1AGcdbHza+CUkHHrf8EkolISNhgPtm0YDvbBrfNXkumbmNWVp6J/N4TOFLZ1YM7BA0e0w5LgLpTQNUoc6OrtO8bH8hagNbsTpE4HuuMs6UT1Km3kxVc0kUjW2SNWuP+Cg7SNgpWq2TA1Sfy/+rSxGaZ78vmTJkdIav6OSxqM6oYbqfSfGopLYBFHTK4C+iTA8trAyevK60IyYrL5+29BJCs9d3+I0LUYnSxOhuWAbBm5Q8kzK7gvnPGHknSL+30w0d6BzpHo8gwfOzr9ncy/PdTlCqJ4/lWoirNgqrVEs9raMdPVD7+Iu2WIQH2gCrnXfghE0rt+aKxu5FzV5y8XYqEt6RTK8Bil7Hxr4Li4K0eBnCEecZEiYZesbJxpUWr8QQyixjhDMxWTDkJdecBvD3lEKWiR8J7WRhEjmyR03Xw2ke3lHs=) 2026-03-07 00:22:43.703381 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIORAHy+TQCypnA8imr8MxcFYlBIpeMm3ryzi6RcJkUOVqEzc2AHszbyW7W5z9Z6Cwjno65+zbu61SLHucFqXHQ=) 2026-03-07 00:22:43.703392 | orchestrator | 2026-03-07 00:22:43.703403 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:22:43.703414 | orchestrator | Saturday 07 March 2026 00:22:42 +0000 (0:00:01.032) 0:00:11.344 ******** 2026-03-07 00:22:43.703434 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDrY3ZVTgdbwdiHP6lkCtstFOG6W63XlwDCjThAgtcezRJHHqYH55bviBZ1ZPihRM7UZNU71csI1ybAivfcwExXh8fFFd7VfPEG12N0wDe98TFhsTv6FSI6UVcvm+iixLS2EAtOp4Dg5yylXlr8NcmdRV1jPHca6P5nzu9c8m/RzSr1JYndbNrZx6LrG0f/LXBGU5eiik1b0/zWBDfqq/BQRuqrU33vAFHXusgE6uqbtE//KkQFTv/Ek4TQqJzh6d1sTFr2ExEjNIJk8onUe18mEMS5V6+UIS/CQfw/A3ptP1r41V1SxOaDb27ptsxCDBbAsOrtNWDVK82y9dc7YbKKUWxw7gf/DvcxEbxfLwh/Q19OEui0zQggyBrNMqgv8pftDlID4jIHIN126o9wrTfn04dkGGWtP7scOi/mzoJ7PMp7lF0daJBHv11gHstTL2axDJQODpyNiHPZ9X7ilFCuD8U3R8ndBEPjwlBi8b3ZQ96Eu6oKX17HzZ6ULiRdqPU=) 2026-03-07 00:22:55.052518 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEAkgTmq0Q3VTDLv4TZ5p1oha8mKABSh86YgZTtICRGb0t/EJBjier7tR0jVQGDvNtYQUpSdAo2YJXf2lIpdvOg=) 
2026-03-07 00:22:55.052630 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILxpRFb8+5i0lcyL8wSUztPv5G0lFQLOywBYNj03H+VI) 2026-03-07 00:22:55.052671 | orchestrator | 2026-03-07 00:22:55.052683 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:22:55.052694 | orchestrator | Saturday 07 March 2026 00:22:43 +0000 (0:00:01.007) 0:00:12.352 ******** 2026-03-07 00:22:55.052704 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPnV60DF8Jw7tZLYi6Wx3Ba1cWtGNvBEHVI2LO+350ua) 2026-03-07 00:22:55.052716 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCliOzuEb9lLIbrVMEHXLhtbePWvJN4hM7YZTZosUIfqVCGbMcfEMCjbHDaEQLnwcXgSmnJZDh7CMjUxPYum/G370irvmwioj8yJkBC1S62UoX80FcqkyvSRw0Ro3rlQyvc1a79+VBf7NhAjozk7uhQrmEVDyl1OEtDQKM6lFT4kvPMg/zkxpILsFspMWtcU2B9YqANttlufHZ2of89NX7HX9QdUyeSGZrUv8VRrspRxrJC5Ili4YOmzy/D6tX4n7hbGwxG1Gunwy642RQERionjeP3qd9jG7bKPwpQzLfTnI7qapc7P8K4UHEx/kEgqKah2GEdz67RP+p0PM4QGpjIh5xQxMxIEzoPbqTEj1vYYgB20X4IEeuLyikocUSSWE4mBD4YrakWnqrSb+n0b1yyJY9JGNeQXQrTrqg4Uzfzy334pfFIACdWTs6Q/F5wa6xTUC1ZGnW5mWLMKSIkGWC7aViKNJBJFZAg8eWwWde78wMQSPgmRzQlKgzRS8y2U0U=) 2026-03-07 00:22:55.052753 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMcu69nM8UBIrQBALzYqIQdKskbf0JWWJ1xgYfrm8oxPuzR07CBo+WT5S4wdv64OcvqejarNZ7ZNBf+hU2ZLB98=) 2026-03-07 00:22:55.052763 | orchestrator | 2026-03-07 00:22:55.052773 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-07 00:22:55.052783 | orchestrator | Saturday 07 March 2026 00:22:45 +0000 (0:00:01.991) 0:00:14.343 ******** 2026-03-07 00:22:55.052793 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-07 00:22:55.052803 | orchestrator | ok: [testbed-manager] => 
(item=testbed-node-3) 2026-03-07 00:22:55.052813 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-07 00:22:55.052822 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-07 00:22:55.052832 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-07 00:22:55.052841 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-07 00:22:55.052851 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-07 00:22:55.052873 | orchestrator | 2026-03-07 00:22:55.052883 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-07 00:22:55.052904 | orchestrator | Saturday 07 March 2026 00:22:50 +0000 (0:00:05.216) 0:00:19.559 ******** 2026-03-07 00:22:55.052915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-07 00:22:55.052927 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-07 00:22:55.052937 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-07 00:22:55.052947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-07 00:22:55.052956 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-07 00:22:55.052966 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-07 00:22:55.052976 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-07 00:22:55.052985 | orchestrator | 2026-03-07 00:22:55.052995 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:22:55.053005 | orchestrator | Saturday 07 March 2026 00:22:51 +0000 (0:00:00.177) 0:00:19.736 ******** 2026-03-07 00:22:55.053015 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKF3ReQNvNG5PMr0WEDZEIzKC1H4NuMlyEwRaD/JaP/w) 2026-03-07 00:22:55.053069 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHbnUnvA0U/2jmgkmEKrw7B69meA/HA7Vvn+Ac3glYp0Wfbg6lLnVLOjZP1UxZs5IwVfhPamN/NI3ehGrhaaLJlYRMtvqrEoGD/V0Rx5ZMgAmBbgHRi2xE3RnGFRI7dMgOcA0QIbPVsYRkVKfvEC3d4rwvVoO1WR7442e33Sz0tq4bQ+T3f6nt0INYMQQxrF4IhLcmRspG0oFP2mlGw8L7ZjPJ5hLYukz4VFAhyLFHhmIxD2Xkf2POdhIQ6UzX1QYe7Mk/zpuCZCKzk4HsbJXuCpxqxQExy/NI0aDIoWXzAw13hKukkrpQAhXOHsk02Xk3uvLuoVrx4nGVib+prjXhqditfLJwhYYPx0kaidOL++pGL0QuGPuPEuxPAbgRl0Tn75LRZ/u4L57aXmarDkrAESTXfP4OzvzixUVvLs4TAmEr8d5iSHc+fgA562sfQ7yOUcjbGq0el08Vwuf/97kBnJmKQCiR8UB6rnVAOz6vzE2VF1SlrpeHUjxMgeunrxU=) 2026-03-07 00:22:55.053083 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH1p8NXVT63AI5w0en3xgdzqJOcCnyYHj8AkWELWpIBW6In6A7231enBDvYZnfwonCmcAEDyX0FpC/nHE/JL9QM=) 2026-03-07 00:22:55.053103 | orchestrator | 2026-03-07 00:22:55.053114 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:22:55.053127 | orchestrator | Saturday 07 March 2026 
00:22:52 +0000 (0:00:00.967) 0:00:20.704 ******** 2026-03-07 00:22:55.053139 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIF5SNcWzdNlQNtAlFnadRhTrNLh01qButS+y0q3ZfSyQ5eKWi8pOGWqpCZxocb7/lUOb/HYp2fIwPNMhjHX2LpHEitm4rwIYjhoiN7SUCFtqeBpbdAoTffTejkCAarHVbsCdUI6jMVma8655UWRHXOuQNHnpq3inxS51NwoGLHviqVohWuU9SWpCGk9EAvTsxlEZlSW3GTSvWvRV12/VaR8UjU6M9V5mdd4v1kzKU2BRu8N3ITdjU2M0XiTj37Popri3w4YH5BStOCyeq1DAATtZpgXL/DYC+KgFoyunHZYxG8lMJ3MqnguWraSuyhWwuSkWrXFSn7lcRRCiBc7DfckBc5E7bEOR/H87Zj0MH3CYl175o29Lz/CZDvFIamd7e5XwCGXRr71CJDxHuXbiXYku76s9eufevqqz6vQ6ksPZvzGyh/1bDQSf7vWdM1r7GuTDpsJAByhFdkArfl7Dgb+UFJDsoBm8XvlpbGF7PvweIUmUWtarOqOrwCGCS5u8=) 2026-03-07 00:22:55.053150 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFADWAZMH5fiPBwNF8L0xdmUsBe1vJZrz7V6GKexzvYZVrQoVdAbzc+HApWpfbXOqR1lGRaY2xSWEs6A3RLUMt4=) 2026-03-07 00:22:55.053163 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFPs7ixVVdf4bwcoP6VtQCbc4TPZhY04XVUr7FXRsg9P) 2026-03-07 00:22:55.053174 | orchestrator | 2026-03-07 00:22:55.053185 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:22:55.053196 | orchestrator | Saturday 07 March 2026 00:22:53 +0000 (0:00:00.997) 0:00:21.701 ******** 2026-03-07 00:22:55.053208 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCDwlwCxbzdjJ6Kdcz7CvcT8hR4v970suURm4EZUf99GDPM0pxtZorq0mrQa76EOW7erwDQa2q+/SwFHkbxmwpI=) 2026-03-07 00:22:55.053220 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC9uxnEfhDpPFNJncQLsVjKPwe/WAkRai8D8mp81S9IM31/UYWweHqBGvVHtPbODA4s1jWGc0yTdazN563b63P7Xm7ikKBklPU1jmOeJNbkWDan4hJVmpF+kF3Pn0RQhfmMk5Gawk8NqXmOOSD22X4t1qsFcigS7y5xtydwePYAdWTGPFqw1ywEKfTnH6ntsrr6iAqR+KlJN26KJPW1/mlfiQ+qGnk4r42DrOhwgylLl5aZhljomAO+rGSPbKcAKS7tFQsnyhsIwvaOgmKfmPtfST6Q5unowEaohiQmn3lPekwPuqA8Xr1k20/7NS7ARpC8/dOLT9hbSGYsIWH7om/OEqAaK8Op4xN6X7WvTVMNho8yDkWGJF5pMLi75IsaRzPVPTFmkYu915vNjNS/2yLioU/95z4HneAKvAG6BJAFWLxNKYn72ILj5XipOXUmPfFDduWlxH6g54E9mQNcYSM8dVOjonBzTLBC5Jyszil6mydsU6po0KNTVx+NYXLALw8=) 2026-03-07 00:22:55.053232 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILGH+eMJS4GI8Uk9hxNTR4vhckhZ48qSXxBz/zdu/anA) 2026-03-07 00:22:55.053243 | orchestrator | 2026-03-07 00:22:55.053255 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:22:55.053266 | orchestrator | Saturday 07 March 2026 00:22:54 +0000 (0:00:01.000) 0:00:22.702 ******** 2026-03-07 00:22:55.053277 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLdEKVKEdeU2XqvWzUccZvTG0O+NiQzPHlI5WNn2oid//8X3U7ROQw6p1xycuRxo6vcwTzpoOhU4avLanOw679E=) 2026-03-07 00:22:55.053288 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIFJPuKvnG0wxIIMJTJuhEwWMte7CYKsBVooNpIrCdtM) 2026-03-07 00:22:55.053308 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCrNYGgTV08JTyNMK1mAQAJ0R1UzZ6QwxDOr97CXxaZc6yOub9TEXEliV3m7wri34ogQzmfsp3937rdxbpE8VTtz8xRlAZPqykHUhoIOKURc1B7JlItFUSXN5uwGnBKN1mhTCFdyUoPdr7N3cjqem9akgLT6EDAm3avzuNNwKVKaXyoWPEg8YqBUzuCK8YobT/Ht1aETbsgAGtdXgPSacL7qR5qVZhhSMHBIRFBPXHFBW+5fSWw/nNJ7ns8tZWnNLPYm3RVG7tvioDlNmB1tCBnCBcNgS8RP7VpUrS8nAUHSf/PhyHV4AH7FNFYsILsaFoxnmyg+yHeaSNaPAtJeyqjWgTQSEWzeWj066YL3v0jyiGXLZmlVlxhp1eOoiKY8OzQ3raicHwQfV1TBox2xRJMpWZokrewBpgFyuX1k+mmkv/EvtSOVT1oSf1PVn3ZV3Z3UaezUJoybebjWM1E9yguWcpMgCp3hSITBxf+oliSfTUPOlRwP4bDNHu5ObviWlk=) 2026-03-07 00:22:59.228616 | orchestrator | 2026-03-07 00:22:59.228808 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:22:59.228828 | orchestrator | Saturday 07 March 2026 00:22:55 +0000 (0:00:00.996) 0:00:23.698 ******** 2026-03-07 00:22:59.228843 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2wUb7TGMub4/3h0Nb4ufsPZTK/nr8eZXz16bo996sA188+OcwZCWIRxV88qu07bJoljqvh0aMvk1jW1AGcdbHza+CUkHHrf8EkolISNhgPtm0YDvbBrfNXkumbmNWVp6J/N4TOFLZ1YM7BA0e0w5LgLpTQNUoc6OrtO8bH8hagNbsTpE4HuuMs6UT1Km3kxVc0kUjW2SNWuP+Cg7SNgpWq2TA1Sfy/+rSxGaZ78vmTJkdIav6OSxqM6oYbqfSfGopLYBFHTK4C+iTA8trAyevK60IyYrL5+29BJCs9d3+I0LUYnSxOhuWAbBm5Q8kzK7gvnPGHknSL+30w0d6BzpHo8gwfOzr9ncy/PdTlCqJ4/lWoirNgqrVEs9raMdPVD7+Iu2WIQH2gCrnXfghE0rt+aKxu5FzV5y8XYqEt6RTK8Bil7Hxr4Li4K0eBnCEecZEiYZesbJxpUWr8QQyixjhDMxWTDkJdecBvD3lEKWiR8J7WRhEjmyR03Xw2ke3lHs=) 2026-03-07 00:22:59.228860 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIORAHy+TQCypnA8imr8MxcFYlBIpeMm3ryzi6RcJkUOVqEzc2AHszbyW7W5z9Z6Cwjno65+zbu61SLHucFqXHQ=) 2026-03-07 00:22:59.228874 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGHkm1WyRofbr+QrHtCalwfxgV/402VMIj3/diviiM1f) 2026-03-07 00:22:59.228886 | orchestrator | 2026-03-07 00:22:59.228898 | orchestrator | 
TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:22:59.228909 | orchestrator | Saturday 07 March 2026 00:22:56 +0000 (0:00:01.000) 0:00:24.699 ******** 2026-03-07 00:22:59.228934 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDrY3ZVTgdbwdiHP6lkCtstFOG6W63XlwDCjThAgtcezRJHHqYH55bviBZ1ZPihRM7UZNU71csI1ybAivfcwExXh8fFFd7VfPEG12N0wDe98TFhsTv6FSI6UVcvm+iixLS2EAtOp4Dg5yylXlr8NcmdRV1jPHca6P5nzu9c8m/RzSr1JYndbNrZx6LrG0f/LXBGU5eiik1b0/zWBDfqq/BQRuqrU33vAFHXusgE6uqbtE//KkQFTv/Ek4TQqJzh6d1sTFr2ExEjNIJk8onUe18mEMS5V6+UIS/CQfw/A3ptP1r41V1SxOaDb27ptsxCDBbAsOrtNWDVK82y9dc7YbKKUWxw7gf/DvcxEbxfLwh/Q19OEui0zQggyBrNMqgv8pftDlID4jIHIN126o9wrTfn04dkGGWtP7scOi/mzoJ7PMp7lF0daJBHv11gHstTL2axDJQODpyNiHPZ9X7ilFCuD8U3R8ndBEPjwlBi8b3ZQ96Eu6oKX17HzZ6ULiRdqPU=) 2026-03-07 00:22:59.229796 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEAkgTmq0Q3VTDLv4TZ5p1oha8mKABSh86YgZTtICRGb0t/EJBjier7tR0jVQGDvNtYQUpSdAo2YJXf2lIpdvOg=) 2026-03-07 00:22:59.229883 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILxpRFb8+5i0lcyL8wSUztPv5G0lFQLOywBYNj03H+VI) 2026-03-07 00:22:59.229899 | orchestrator | 2026-03-07 00:22:59.229912 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-07 00:22:59.229925 | orchestrator | Saturday 07 March 2026 00:22:57 +0000 (0:00:00.993) 0:00:25.692 ******** 2026-03-07 00:22:59.229936 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMcu69nM8UBIrQBALzYqIQdKskbf0JWWJ1xgYfrm8oxPuzR07CBo+WT5S4wdv64OcvqejarNZ7ZNBf+hU2ZLB98=) 2026-03-07 00:22:59.229972 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCliOzuEb9lLIbrVMEHXLhtbePWvJN4hM7YZTZosUIfqVCGbMcfEMCjbHDaEQLnwcXgSmnJZDh7CMjUxPYum/G370irvmwioj8yJkBC1S62UoX80FcqkyvSRw0Ro3rlQyvc1a79+VBf7NhAjozk7uhQrmEVDyl1OEtDQKM6lFT4kvPMg/zkxpILsFspMWtcU2B9YqANttlufHZ2of89NX7HX9QdUyeSGZrUv8VRrspRxrJC5Ili4YOmzy/D6tX4n7hbGwxG1Gunwy642RQERionjeP3qd9jG7bKPwpQzLfTnI7qapc7P8K4UHEx/kEgqKah2GEdz67RP+p0PM4QGpjIh5xQxMxIEzoPbqTEj1vYYgB20X4IEeuLyikocUSSWE4mBD4YrakWnqrSb+n0b1yyJY9JGNeQXQrTrqg4Uzfzy334pfFIACdWTs6Q/F5wa6xTUC1ZGnW5mWLMKSIkGWC7aViKNJBJFZAg8eWwWde78wMQSPgmRzQlKgzRS8y2U0U=) 2026-03-07 00:22:59.229987 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPnV60DF8Jw7tZLYi6Wx3Ba1cWtGNvBEHVI2LO+350ua) 2026-03-07 00:22:59.229998 | orchestrator | 2026-03-07 00:22:59.230009 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-07 00:22:59.230110 | orchestrator | Saturday 07 March 2026 00:22:58 +0000 (0:00:01.016) 0:00:26.709 ******** 2026-03-07 00:22:59.230126 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-07 00:22:59.230137 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-07 00:22:59.230148 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-07 00:22:59.230159 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-07 00:22:59.230170 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-07 00:22:59.230181 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-07 00:22:59.230192 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-07 00:22:59.230203 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:22:59.230214 | orchestrator | 2026-03-07 00:22:59.230286 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-07 00:22:59.230301 | orchestrator | Saturday 07 March 
2026 00:22:58 +0000 (0:00:00.160) 0:00:26.869 ******** 2026-03-07 00:22:59.230312 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:22:59.230323 | orchestrator | 2026-03-07 00:22:59.230334 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-07 00:22:59.230346 | orchestrator | Saturday 07 March 2026 00:22:58 +0000 (0:00:00.056) 0:00:26.926 ******** 2026-03-07 00:22:59.230363 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:22:59.230375 | orchestrator | 2026-03-07 00:22:59.230386 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-07 00:22:59.230396 | orchestrator | Saturday 07 March 2026 00:22:58 +0000 (0:00:00.063) 0:00:26.990 ******** 2026-03-07 00:22:59.230407 | orchestrator | changed: [testbed-manager] 2026-03-07 00:22:59.230418 | orchestrator | 2026-03-07 00:22:59.230429 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:22:59.230441 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-07 00:22:59.230453 | orchestrator | 2026-03-07 00:22:59.230464 | orchestrator | 2026-03-07 00:22:59.230475 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:22:59.230486 | orchestrator | Saturday 07 March 2026 00:22:59 +0000 (0:00:00.708) 0:00:27.698 ******** 2026-03-07 00:22:59.230496 | orchestrator | =============================================================================== 2026-03-07 00:22:59.230507 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.72s 2026-03-07 00:22:59.230518 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.22s 2026-03-07 00:22:59.230530 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.99s 2026-03-07 
00:22:59.230541 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-03-07 00:22:59.230552 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-07 00:22:59.230563 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-07 00:22:59.230574 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-07 00:22:59.230585 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-07 00:22:59.230596 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-07 00:22:59.230607 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-07 00:22:59.230618 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-07 00:22:59.230659 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-07 00:22:59.230672 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-07 00:22:59.230683 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-07 00:22:59.230694 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-03-07 00:22:59.230714 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2026-03-07 00:22:59.230725 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.71s 2026-03-07 00:22:59.230735 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-03-07 00:22:59.230747 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 
2026-03-07 00:22:59.230758 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-03-07 00:22:59.500506 | orchestrator | + osism apply squid 2026-03-07 00:23:11.547168 | orchestrator | 2026-03-07 00:23:11 | INFO  | Task 0164ef65-3147-4a89-85a0-409427309331 (squid) was prepared for execution. 2026-03-07 00:23:11.547300 | orchestrator | 2026-03-07 00:23:11 | INFO  | It takes a moment until task 0164ef65-3147-4a89-85a0-409427309331 (squid) has been started and output is visible here. 2026-03-07 00:25:03.438214 | orchestrator | 2026-03-07 00:25:03.438332 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-07 00:25:03.438345 | orchestrator | 2026-03-07 00:25:03.438355 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-07 00:25:03.438365 | orchestrator | Saturday 07 March 2026 00:23:15 +0000 (0:00:00.116) 0:00:00.116 ******** 2026-03-07 00:25:03.438374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-07 00:25:03.438384 | orchestrator | 2026-03-07 00:25:03.438393 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-07 00:25:03.438402 | orchestrator | Saturday 07 March 2026 00:23:15 +0000 (0:00:00.063) 0:00:00.180 ******** 2026-03-07 00:25:03.438411 | orchestrator | ok: [testbed-manager] 2026-03-07 00:25:03.438421 | orchestrator | 2026-03-07 00:25:03.438458 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-07 00:25:03.438467 | orchestrator | Saturday 07 March 2026 00:23:16 +0000 (0:00:01.099) 0:00:01.279 ******** 2026-03-07 00:25:03.438477 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-07 00:25:03.438487 | orchestrator | changed: 
[testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-07 00:25:03.438496 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-07 00:25:03.438505 | orchestrator | 2026-03-07 00:25:03.438514 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-07 00:25:03.438523 | orchestrator | Saturday 07 March 2026 00:23:17 +0000 (0:00:01.003) 0:00:02.282 ******** 2026-03-07 00:25:03.438532 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-07 00:25:03.438540 | orchestrator | 2026-03-07 00:25:03.438549 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-07 00:25:03.438558 | orchestrator | Saturday 07 March 2026 00:23:18 +0000 (0:00:00.940) 0:00:03.223 ******** 2026-03-07 00:25:03.438567 | orchestrator | ok: [testbed-manager] 2026-03-07 00:25:03.438576 | orchestrator | 2026-03-07 00:25:03.438585 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-07 00:25:03.438594 | orchestrator | Saturday 07 March 2026 00:23:18 +0000 (0:00:00.334) 0:00:03.558 ******** 2026-03-07 00:25:03.438602 | orchestrator | changed: [testbed-manager] 2026-03-07 00:25:03.438619 | orchestrator | 2026-03-07 00:25:03.438649 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-07 00:25:03.438676 | orchestrator | Saturday 07 March 2026 00:23:19 +0000 (0:00:00.853) 0:00:04.412 ******** 2026-03-07 00:25:03.438689 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-07 00:25:03.438705 | orchestrator | ok: [testbed-manager] 2026-03-07 00:25:03.438722 | orchestrator | 2026-03-07 00:25:03.438737 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-07 00:25:03.438752 | orchestrator | Saturday 07 March 2026 00:23:50 +0000 (0:00:31.007) 0:00:35.419 ******** 2026-03-07 00:25:03.438794 | orchestrator | changed: [testbed-manager] 2026-03-07 00:25:03.438809 | orchestrator | 2026-03-07 00:25:03.438825 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-07 00:25:03.438840 | orchestrator | Saturday 07 March 2026 00:24:02 +0000 (0:00:11.852) 0:00:47.272 ******** 2026-03-07 00:25:03.438854 | orchestrator | Pausing for 60 seconds 2026-03-07 00:25:03.438871 | orchestrator | changed: [testbed-manager] 2026-03-07 00:25:03.438887 | orchestrator | 2026-03-07 00:25:03.438903 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-07 00:25:03.438919 | orchestrator | Saturday 07 March 2026 00:25:02 +0000 (0:01:00.100) 0:01:47.372 ******** 2026-03-07 00:25:03.438935 | orchestrator | ok: [testbed-manager] 2026-03-07 00:25:03.438950 | orchestrator | 2026-03-07 00:25:03.438966 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-07 00:25:03.438982 | orchestrator | Saturday 07 March 2026 00:25:02 +0000 (0:00:00.063) 0:01:47.436 ******** 2026-03-07 00:25:03.438998 | orchestrator | changed: [testbed-manager] 2026-03-07 00:25:03.439008 | orchestrator | 2026-03-07 00:25:03.439019 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:25:03.439031 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:25:03.439041 | orchestrator | 2026-03-07 00:25:03.439051 | orchestrator | 2026-03-07 00:25:03.439062 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-07 00:25:03.439073 | orchestrator | Saturday 07 March 2026 00:25:03 +0000 (0:00:00.603) 0:01:48.040 ******** 2026-03-07 00:25:03.439084 | orchestrator | =============================================================================== 2026-03-07 00:25:03.439094 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.10s 2026-03-07 00:25:03.439104 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.01s 2026-03-07 00:25:03.439114 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.85s 2026-03-07 00:25:03.439141 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.10s 2026-03-07 00:25:03.439150 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.00s 2026-03-07 00:25:03.439159 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.94s 2026-03-07 00:25:03.439168 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.85s 2026-03-07 00:25:03.439176 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s 2026-03-07 00:25:03.439184 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.33s 2026-03-07 00:25:03.439193 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-03-07 00:25:03.439202 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.06s 2026-03-07 00:25:03.711374 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-07 00:25:03.711805 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-07 00:25:03.761694 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-07 00:25:03.761788 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-03-07 00:25:03.767376 | orchestrator | + set -e 2026-03-07 00:25:03.767500 | orchestrator | + NAMESPACE=kolla/release 2026-03-07 00:25:03.767516 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-07 00:25:03.773951 | orchestrator | ++ semver 9.5.0 9.0.0 2026-03-07 00:25:03.838327 | orchestrator | + [[ 1 -lt 0 ]] 2026-03-07 00:25:03.838470 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-07 00:25:15.831308 | orchestrator | 2026-03-07 00:25:15 | INFO  | Task 85a8dd48-8735-4d9f-af71-c16bda3052bd (operator) was prepared for execution. 2026-03-07 00:25:15.831468 | orchestrator | 2026-03-07 00:25:15 | INFO  | It takes a moment until task 85a8dd48-8735-4d9f-af71-c16bda3052bd (operator) has been started and output is visible here. 2026-03-07 00:25:31.749030 | orchestrator | 2026-03-07 00:25:31.749151 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-07 00:25:31.749167 | orchestrator | 2026-03-07 00:25:31.749179 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-07 00:25:31.749191 | orchestrator | Saturday 07 March 2026 00:25:19 +0000 (0:00:00.113) 0:00:00.113 ******** 2026-03-07 00:25:31.749202 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:25:31.749214 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:25:31.749225 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:25:31.749236 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:25:31.749246 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:25:31.749257 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:25:31.749268 | orchestrator | 2026-03-07 00:25:31.749279 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-07 00:25:31.749290 | orchestrator | Saturday 07 March 2026 00:25:22 +0000 (0:00:03.227) 0:00:03.340 
******** 2026-03-07 00:25:31.749301 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:25:31.749312 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:25:31.749322 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:25:31.749333 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:25:31.749359 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:25:31.749371 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:25:31.749479 | orchestrator | 2026-03-07 00:25:31.749491 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-07 00:25:31.749502 | orchestrator | 2026-03-07 00:25:31.749513 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-07 00:25:31.749524 | orchestrator | Saturday 07 March 2026 00:25:23 +0000 (0:00:00.772) 0:00:04.113 ******** 2026-03-07 00:25:31.749535 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:25:31.749546 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:25:31.749557 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:25:31.749570 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:25:31.749583 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:25:31.749595 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:25:31.749609 | orchestrator | 2026-03-07 00:25:31.749621 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-07 00:25:31.749634 | orchestrator | Saturday 07 March 2026 00:25:23 +0000 (0:00:00.171) 0:00:04.284 ******** 2026-03-07 00:25:31.749647 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:25:31.749672 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:25:31.749684 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:25:31.749697 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:25:31.749710 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:25:31.749722 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:25:31.749734 | orchestrator | 2026-03-07 00:25:31.749747 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-07 00:25:31.749760 | orchestrator | Saturday 07 March 2026 00:25:24 +0000 (0:00:00.171) 0:00:04.456 ******** 2026-03-07 00:25:31.749772 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:25:31.749786 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:25:31.749800 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:25:31.749812 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:25:31.749824 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:25:31.749836 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:25:31.749849 | orchestrator | 2026-03-07 00:25:31.749862 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-07 00:25:31.749875 | orchestrator | Saturday 07 March 2026 00:25:24 +0000 (0:00:00.731) 0:00:05.188 ******** 2026-03-07 00:25:31.749887 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:25:31.749899 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:25:31.749910 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:25:31.749920 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:25:31.749931 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:25:31.749942 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:25:31.749953 | orchestrator | 2026-03-07 00:25:31.749964 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-07 00:25:31.749998 | orchestrator | Saturday 07 March 2026 00:25:25 +0000 (0:00:00.927) 0:00:06.115 ******** 2026-03-07 00:25:31.750010 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-07 00:25:31.750086 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-07 00:25:31.750098 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-07 00:25:31.750109 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-03-07 00:25:31.750120 | 
orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-07 00:25:31.750131 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-03-07 00:25:31.750142 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-07 00:25:31.750153 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-07 00:25:31.750163 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-07 00:25:31.750174 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-07 00:25:31.750226 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-07 00:25:31.750248 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-07 00:25:31.750269 | orchestrator | 2026-03-07 00:25:31.750281 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-07 00:25:31.750292 | orchestrator | Saturday 07 March 2026 00:25:27 +0000 (0:00:01.297) 0:00:07.412 ******** 2026-03-07 00:25:31.750303 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:25:31.750314 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:25:31.750324 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:25:31.750335 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:25:31.750345 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:25:31.750356 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:25:31.750367 | orchestrator | 2026-03-07 00:25:31.750403 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-07 00:25:31.750416 | orchestrator | Saturday 07 March 2026 00:25:28 +0000 (0:00:01.203) 0:00:08.615 ******** 2026-03-07 00:25:31.750428 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-03-07 00:25:31.750439 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-03-07 00:25:31.750450 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-03-07 00:25:31.750461 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-07 00:25:31.750493 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-03-07 00:25:31.750505 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-03-07 00:25:31.750516 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-03-07 00:25:31.750526 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-03-07 00:25:31.750537 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-03-07 00:25:31.750548 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-03-07 00:25:31.750558 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-03-07 00:25:31.750569 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-03-07 00:25:31.750580 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-03-07 00:25:31.750591 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-03-07 00:25:31.750601 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-03-07 00:25:31.750612 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-03-07 00:25:31.750623 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-03-07 00:25:31.750633 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-03-07 00:25:31.750644 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-03-07 00:25:31.750655 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-03-07 00:25:31.750678 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-03-07 00:25:31.750689 | 
orchestrator | 2026-03-07 00:25:31.750700 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-07 00:25:31.750712 | orchestrator | Saturday 07 March 2026 00:25:29 +0000 (0:00:01.296) 0:00:09.912 ******** 2026-03-07 00:25:31.750723 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:25:31.750733 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:25:31.750744 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:25:31.750755 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:25:31.750765 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:25:31.750776 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:25:31.750787 | orchestrator | 2026-03-07 00:25:31.750798 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-07 00:25:31.750809 | orchestrator | Saturday 07 March 2026 00:25:29 +0000 (0:00:00.146) 0:00:10.059 ******** 2026-03-07 00:25:31.750820 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:25:31.750830 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:25:31.750841 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:25:31.750852 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:25:31.750862 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:25:31.750873 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:25:31.750884 | orchestrator | 2026-03-07 00:25:31.750895 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-07 00:25:31.750906 | orchestrator | Saturday 07 March 2026 00:25:29 +0000 (0:00:00.176) 0:00:10.235 ******** 2026-03-07 00:25:31.750917 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:25:31.750927 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:25:31.750938 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:25:31.750949 | orchestrator | changed: [testbed-node-4] 2026-03-07 
00:25:31.750959 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:25:31.750970 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:25:31.750981 | orchestrator | 2026-03-07 00:25:31.750991 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-07 00:25:31.751002 | orchestrator | Saturday 07 March 2026 00:25:30 +0000 (0:00:00.706) 0:00:10.942 ******** 2026-03-07 00:25:31.751013 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:25:31.751074 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:25:31.751087 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:25:31.751098 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:25:31.751109 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:25:31.751120 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:25:31.751131 | orchestrator | 2026-03-07 00:25:31.751142 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-07 00:25:31.751153 | orchestrator | Saturday 07 March 2026 00:25:30 +0000 (0:00:00.181) 0:00:11.124 ******** 2026-03-07 00:25:31.751164 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-07 00:25:31.751185 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:25:31.751196 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-07 00:25:31.751207 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-07 00:25:31.751218 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:25:31.751229 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-07 00:25:31.751240 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:25:31.751251 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:25:31.751262 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-07 00:25:31.751273 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:25:31.751284 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-07 
00:25:31.751295 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:25:31.751306 | orchestrator | 2026-03-07 00:25:31.751317 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-07 00:25:31.751328 | orchestrator | Saturday 07 March 2026 00:25:31 +0000 (0:00:00.726) 0:00:11.851 ******** 2026-03-07 00:25:31.751347 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:25:31.751358 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:25:31.751369 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:25:31.751403 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:25:31.751414 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:25:31.751425 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:25:31.751436 | orchestrator | 2026-03-07 00:25:31.751447 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-07 00:25:31.751457 | orchestrator | Saturday 07 March 2026 00:25:31 +0000 (0:00:00.149) 0:00:12.000 ******** 2026-03-07 00:25:31.751468 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:25:31.751479 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:25:31.751490 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:25:31.751500 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:25:31.751520 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:25:33.036928 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:25:33.037023 | orchestrator | 2026-03-07 00:25:33.037034 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-07 00:25:33.037042 | orchestrator | Saturday 07 March 2026 00:25:31 +0000 (0:00:00.141) 0:00:12.142 ******** 2026-03-07 00:25:33.037049 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:25:33.037056 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:25:33.037062 | orchestrator | skipping: [testbed-node-2] 2026-03-07 
00:25:33.037069 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:25:33.037075 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:25:33.037082 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:25:33.037089 | orchestrator | 2026-03-07 00:25:33.037095 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-07 00:25:33.037102 | orchestrator | Saturday 07 March 2026 00:25:31 +0000 (0:00:00.161) 0:00:12.303 ******** 2026-03-07 00:25:33.037109 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:25:33.037120 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:25:33.037127 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:25:33.037150 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:25:33.037156 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:25:33.037162 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:25:33.037167 | orchestrator | 2026-03-07 00:25:33.037172 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-07 00:25:33.037179 | orchestrator | Saturday 07 March 2026 00:25:32 +0000 (0:00:00.663) 0:00:12.966 ******** 2026-03-07 00:25:33.037184 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:25:33.037190 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:25:33.037196 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:25:33.037202 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:25:33.037208 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:25:33.037215 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:25:33.037220 | orchestrator | 2026-03-07 00:25:33.037226 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:25:33.037234 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-07 00:25:33.037242 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-07 00:25:33.037248 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-07 00:25:33.037254 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-07 00:25:33.037261 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-07 00:25:33.037287 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-07 00:25:33.037293 | orchestrator | 2026-03-07 00:25:33.037298 | orchestrator | 2026-03-07 00:25:33.037304 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:25:33.037310 | orchestrator | Saturday 07 March 2026 00:25:32 +0000 (0:00:00.218) 0:00:13.185 ******** 2026-03-07 00:25:33.037316 | orchestrator | =============================================================================== 2026-03-07 00:25:33.037321 | orchestrator | Gathering Facts --------------------------------------------------------- 3.23s 2026-03-07 00:25:33.037327 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.30s 2026-03-07 00:25:33.037334 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.30s 2026-03-07 00:25:33.037341 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.20s 2026-03-07 00:25:33.037346 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.93s 2026-03-07 00:25:33.037352 | orchestrator | Do not require tty for all users ---------------------------------------- 0.77s 2026-03-07 00:25:33.037358 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.73s 2026-03-07 00:25:33.037363 | orchestrator | osism.commons.operator : Set ssh 
authorized keys ------------------------ 0.73s 2026-03-07 00:25:33.037369 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.71s 2026-03-07 00:25:33.037442 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s 2026-03-07 00:25:33.037450 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s 2026-03-07 00:25:33.037456 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s 2026-03-07 00:25:33.037462 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.18s 2026-03-07 00:25:33.037468 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s 2026-03-07 00:25:33.037474 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s 2026-03-07 00:25:33.037480 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2026-03-07 00:25:33.037486 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s 2026-03-07 00:25:33.037493 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s 2026-03-07 00:25:33.037501 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s 2026-03-07 00:25:33.303361 | orchestrator | + osism apply --environment custom facts 2026-03-07 00:25:35.147490 | orchestrator | 2026-03-07 00:25:35 | INFO  | Trying to run play facts in environment custom 2026-03-07 00:25:45.336183 | orchestrator | 2026-03-07 00:25:45 | INFO  | Task d3371ec6-d427-4f35-97fa-1823cb14be30 (facts) was prepared for execution. 2026-03-07 00:25:45.336332 | orchestrator | 2026-03-07 00:25:45 | INFO  | It takes a moment until task d3371ec6-d427-4f35-97fa-1823cb14be30 (facts) has been started and output is visible here. 
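The `set -x` trace earlier in this run shows `set-kolla-namespace.sh kolla/release` switching the Kolla image namespace by rewriting `docker_namespace` in the inventory group vars. A minimal sketch of that rewrite, assuming the script does no more than the `sed` visible in the trace (a temp file stands in for `/opt/configuration/inventory/group_vars/all/kolla.yml`):

```shell
#!/bin/sh
# Sketch of the namespace switch seen in the -x trace above.
# Assumption: the real script only performs this substitution; the
# temp file here is a stand-in for the actual kolla.yml group vars file.
set -e
NAMESPACE="kolla/release"
KOLLA_YML="$(mktemp)"
printf 'docker_namespace: kolla/old\n' > "$KOLLA_YML"
# Same substitution as in the trace: replace whatever namespace is configured.
sed -i "s#docker_namespace: .*#docker_namespace: ${NAMESPACE}#g" "$KOLLA_YML"
cat "$KOLLA_YML"
rm -f "$KOLLA_YML"
```

Note that GNU `sed -i` (as on the Ubuntu/Debian nodes in this job) edits in place without a backup suffix; BSD `sed` would need `-i ''`.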
2026-03-07 00:26:29.220202 | orchestrator | 2026-03-07 00:26:29.220358 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-03-07 00:26:29.220375 | orchestrator | 2026-03-07 00:26:29.220387 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-07 00:26:29.220399 | orchestrator | Saturday 07 March 2026 00:25:49 +0000 (0:00:00.080) 0:00:00.080 ******** 2026-03-07 00:26:29.220410 | orchestrator | ok: [testbed-manager] 2026-03-07 00:26:29.220423 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:26:29.220434 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:26:29.220460 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:26:29.220482 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:26:29.220493 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:26:29.220504 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:26:29.220539 | orchestrator | 2026-03-07 00:26:29.220552 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-03-07 00:26:29.220563 | orchestrator | Saturday 07 March 2026 00:25:50 +0000 (0:00:01.386) 0:00:01.467 ******** 2026-03-07 00:26:29.220574 | orchestrator | ok: [testbed-manager] 2026-03-07 00:26:29.220585 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:26:29.220596 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:26:29.220606 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:26:29.220618 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:26:29.220628 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:26:29.220639 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:26:29.220650 | orchestrator | 2026-03-07 00:26:29.220660 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-03-07 00:26:29.220671 | orchestrator | 2026-03-07 00:26:29.220682 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-03-07 00:26:29.220693 | orchestrator | Saturday 07 March 2026 00:25:51 +0000 (0:00:01.219) 0:00:02.686 ******** 2026-03-07 00:26:29.220704 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:29.220714 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:29.220725 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:29.220736 | orchestrator | 2026-03-07 00:26:29.220750 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-07 00:26:29.220764 | orchestrator | Saturday 07 March 2026 00:25:52 +0000 (0:00:00.085) 0:00:02.772 ******** 2026-03-07 00:26:29.220776 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:29.220788 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:29.220801 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:29.220813 | orchestrator | 2026-03-07 00:26:29.220826 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-07 00:26:29.220838 | orchestrator | Saturday 07 March 2026 00:25:52 +0000 (0:00:00.186) 0:00:02.958 ******** 2026-03-07 00:26:29.220850 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:29.220864 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:29.220876 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:29.220888 | orchestrator | 2026-03-07 00:26:29.220901 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-07 00:26:29.220914 | orchestrator | Saturday 07 March 2026 00:25:52 +0000 (0:00:00.204) 0:00:03.163 ******** 2026-03-07 00:26:29.220929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:26:29.220942 | orchestrator | 2026-03-07 00:26:29.220954 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-03-07 00:26:29.220967 | orchestrator | Saturday 07 March 2026 00:25:52 +0000 (0:00:00.131) 0:00:03.294 ******** 2026-03-07 00:26:29.220981 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:29.220994 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:29.221004 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:29.221015 | orchestrator | 2026-03-07 00:26:29.221026 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-07 00:26:29.221037 | orchestrator | Saturday 07 March 2026 00:25:52 +0000 (0:00:00.447) 0:00:03.742 ******** 2026-03-07 00:26:29.221048 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:26:29.221058 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:26:29.221069 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:26:29.221080 | orchestrator | 2026-03-07 00:26:29.221091 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-07 00:26:29.221101 | orchestrator | Saturday 07 March 2026 00:25:53 +0000 (0:00:00.131) 0:00:03.873 ******** 2026-03-07 00:26:29.221112 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:26:29.221123 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:26:29.221133 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:26:29.221144 | orchestrator | 2026-03-07 00:26:29.221155 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-07 00:26:29.221173 | orchestrator | Saturday 07 March 2026 00:25:54 +0000 (0:00:01.070) 0:00:04.944 ******** 2026-03-07 00:26:29.221185 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:26:29.221195 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:26:29.221206 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:26:29.221217 | orchestrator | 2026-03-07 00:26:29.221228 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-07 
00:26:29.221238 | orchestrator | Saturday 07 March 2026 00:25:54 +0000 (0:00:00.463) 0:00:05.408 ******** 2026-03-07 00:26:29.221249 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:26:29.221260 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:26:29.221271 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:26:29.221322 | orchestrator | 2026-03-07 00:26:29.221334 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-07 00:26:29.221393 | orchestrator | Saturday 07 March 2026 00:25:55 +0000 (0:00:01.081) 0:00:06.490 ******** 2026-03-07 00:26:29.221405 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:26:29.221416 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:26:29.221427 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:26:29.221438 | orchestrator | 2026-03-07 00:26:29.221449 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-03-07 00:26:29.221460 | orchestrator | Saturday 07 March 2026 00:26:12 +0000 (0:00:16.287) 0:00:22.778 ******** 2026-03-07 00:26:29.221471 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:26:29.221481 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:26:29.221493 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:26:29.221503 | orchestrator | 2026-03-07 00:26:29.221514 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-03-07 00:26:29.221544 | orchestrator | Saturday 07 March 2026 00:26:12 +0000 (0:00:00.092) 0:00:22.870 ******** 2026-03-07 00:26:29.221556 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:26:29.221566 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:26:29.221577 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:26:29.221588 | orchestrator | 2026-03-07 00:26:29.221599 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-07 
00:26:29.221615 | orchestrator | Saturday 07 March 2026 00:26:20 +0000 (0:00:07.977) 0:00:30.847 ********
2026-03-07 00:26:29.221626 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:26:29.221637 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:26:29.221648 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:26:29.221659 | orchestrator |
2026-03-07 00:26:29.221670 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-07 00:26:29.221681 | orchestrator | Saturday 07 March 2026 00:26:20 +0000 (0:00:00.521) 0:00:31.368 ********
2026-03-07 00:26:29.221692 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-07 00:26:29.221704 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-07 00:26:29.221714 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-07 00:26:29.221725 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-07 00:26:29.221736 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-07 00:26:29.221747 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-07 00:26:29.221757 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-07 00:26:29.221768 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-07 00:26:29.221779 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-07 00:26:29.221790 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-07 00:26:29.221800 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-07 00:26:29.221811 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-07 00:26:29.221822 | orchestrator |
2026-03-07 00:26:29.221832 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-07 00:26:29.221851 | orchestrator | Saturday 07 March 2026 00:26:24 +0000 (0:00:03.547) 0:00:34.916 ********
2026-03-07 00:26:29.221862 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:26:29.221873 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:26:29.221884 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:26:29.221894 | orchestrator |
2026-03-07 00:26:29.221905 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-07 00:26:29.221916 | orchestrator |
2026-03-07 00:26:29.221927 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-07 00:26:29.221938 | orchestrator | Saturday 07 March 2026 00:26:25 +0000 (0:00:01.371) 0:00:36.288 ********
2026-03-07 00:26:29.221949 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:26:29.221959 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:26:29.221970 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:26:29.221981 | orchestrator | ok: [testbed-manager]
2026-03-07 00:26:29.221992 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:26:29.222002 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:26:29.222013 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:26:29.222080 | orchestrator |
2026-03-07 00:26:29.222091 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:26:29.222103 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:26:29.222115 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:26:29.222128 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:26:29.222139 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:26:29.222150 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 00:26:29.222161 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 00:26:29.222172 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 00:26:29.222183 | orchestrator |
2026-03-07 00:26:29.222194 | orchestrator |
2026-03-07 00:26:29.222205 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:26:29.222216 | orchestrator | Saturday 07 March 2026 00:26:29 +0000 (0:00:03.673) 0:00:39.962 ********
2026-03-07 00:26:29.222227 | orchestrator | ===============================================================================
2026-03-07 00:26:29.222238 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.29s
2026-03-07 00:26:29.222249 | orchestrator | Install required packages (Debian) -------------------------------------- 7.98s
2026-03-07 00:26:29.222260 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.67s
2026-03-07 00:26:29.222271 | orchestrator | Copy fact files --------------------------------------------------------- 3.55s
2026-03-07 00:26:29.222299 | orchestrator | Create custom facts directory ------------------------------------------- 1.39s
2026-03-07 00:26:29.222310 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.37s
2026-03-07 00:26:29.222328 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s
2026-03-07 00:26:29.447823 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.08s
2026-03-07 00:26:29.447927 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.07s
2026-03-07 00:26:29.447963 | orchestrator | Create custom facts directory ------------------------------------------- 0.52s
2026-03-07 00:26:29.447976 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2026-03-07 00:26:29.448010 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s
2026-03-07 00:26:29.448022 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s
2026-03-07 00:26:29.448033 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2026-03-07 00:26:29.448045 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s
2026-03-07 00:26:29.448057 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2026-03-07 00:26:29.448068 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2026-03-07 00:26:29.448079 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s
2026-03-07 00:26:29.752401 | orchestrator | + osism apply bootstrap
2026-03-07 00:26:41.868637 | orchestrator | 2026-03-07 00:26:41 | INFO  | Task 4d702512-e500-4ea8-872d-614b662e0672 (bootstrap) was prepared for execution.
2026-03-07 00:26:41.868834 | orchestrator | 2026-03-07 00:26:41 | INFO  | It takes a moment until task 4d702512-e500-4ea8-872d-614b662e0672 (bootstrap) has been started and output is visible here.
2026-03-07 00:26:57.479114 | orchestrator |
2026-03-07 00:26:57.479203 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-07 00:26:57.479212 | orchestrator |
2026-03-07 00:26:57.479219 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-07 00:26:57.479225 | orchestrator | Saturday 07 March 2026 00:26:45 +0000 (0:00:00.121) 0:00:00.121 ********
2026-03-07 00:26:57.479231 | orchestrator | ok: [testbed-manager]
2026-03-07 00:26:57.479300 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:26:57.479306 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:26:57.479312 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:26:57.479317 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:26:57.479323 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:26:57.479329 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:26:57.479335 | orchestrator |
2026-03-07 00:26:57.479341 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-07 00:26:57.479346 | orchestrator |
2026-03-07 00:26:57.479356 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-07 00:26:57.479366 | orchestrator | Saturday 07 March 2026 00:26:46 +0000 (0:00:00.176) 0:00:00.298 ********
2026-03-07 00:26:57.479373 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:26:57.479382 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:26:57.479389 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:26:57.479394 | orchestrator | ok: [testbed-manager]
2026-03-07 00:26:57.479400 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:26:57.479405 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:26:57.479410 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:26:57.479416 | orchestrator |
2026-03-07 00:26:57.479421 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-07 00:26:57.479427 | orchestrator |
2026-03-07 00:26:57.479433 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-07 00:26:57.479438 | orchestrator | Saturday 07 March 2026 00:26:49 +0000 (0:00:03.587) 0:00:03.885 ********
2026-03-07 00:26:57.479445 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-07 00:26:57.479452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-07 00:26:57.479457 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-07 00:26:57.479463 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-07 00:26:57.479468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-07 00:26:57.479474 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-07 00:26:57.479479 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-07 00:26:57.479485 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-07 00:26:57.479490 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-07 00:26:57.479513 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-07 00:26:57.479518 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-07 00:26:57.479524 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-07 00:26:57.479529 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-07 00:26:57.479535 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-07 00:26:57.479540 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-07 00:26:57.479546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-07 00:26:57.479551 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-07 00:26:57.479557 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-07 00:26:57.479562 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-07 00:26:57.479567 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-07 00:26:57.479573 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-07 00:26:57.479578 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-07 00:26:57.479584 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-07 00:26:57.479589 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-07 00:26:57.479594 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-07 00:26:57.479600 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:26:57.479605 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-07 00:26:57.479611 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-07 00:26:57.479616 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-07 00:26:57.479622 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-07 00:26:57.479627 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-07 00:26:57.479633 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-07 00:26:57.479638 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-07 00:26:57.479644 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:26:57.479649 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-07 00:26:57.479655 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-07 00:26:57.479660 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-07 00:26:57.479665 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-07 00:26:57.479671 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-07 00:26:57.479677 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:26:57.479684 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-07 00:26:57.479690 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-07 00:26:57.479697 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-07 00:26:57.479703 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:26:57.479709 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-07 00:26:57.479716 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-07 00:26:57.479723 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:26:57.479742 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-07 00:26:57.479748 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-07 00:26:57.479755 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-07 00:26:57.479761 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-07 00:26:57.479767 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-07 00:26:57.479774 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:26:57.479780 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-07 00:26:57.479786 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-07 00:26:57.479812 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:26:57.479818 | orchestrator |
2026-03-07 00:26:57.479825 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-07 00:26:57.479831 | orchestrator |
2026-03-07 00:26:57.479838 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-07 00:26:57.479845 | orchestrator | Saturday 07 March 2026 00:26:50 +0000 (0:00:00.446) 0:00:04.332 ********
2026-03-07 00:26:57.479851 | orchestrator | ok: [testbed-manager]
2026-03-07 00:26:57.479858 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:26:57.479864 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:26:57.479870 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:26:57.479877 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:26:57.479883 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:26:57.479889 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:26:57.479896 | orchestrator |
2026-03-07 00:26:57.479902 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-07 00:26:57.479909 | orchestrator | Saturday 07 March 2026 00:26:51 +0000 (0:00:01.251) 0:00:05.583 ********
2026-03-07 00:26:57.479915 | orchestrator | ok: [testbed-manager]
2026-03-07 00:26:57.479921 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:26:57.479928 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:26:57.479934 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:26:57.479940 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:26:57.479946 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:26:57.479952 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:26:57.479959 | orchestrator |
2026-03-07 00:26:57.479965 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-07 00:26:57.479972 | orchestrator | Saturday 07 March 2026 00:26:52 +0000 (0:00:00.274) 0:00:06.812 ********
2026-03-07 00:26:57.479979 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:26:57.479987 | orchestrator |
2026-03-07 00:26:57.479994 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-03-07 00:26:57.480001 | orchestrator | Saturday 07 March 2026 00:26:52 +0000 (0:00:00.274) 0:00:07.086 ********
2026-03-07 00:26:57.480007 | orchestrator | changed: [testbed-manager]
2026-03-07 00:26:57.480014 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:26:57.480020 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:26:57.480027 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:26:57.480034 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:26:57.480040 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:26:57.480045 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:26:57.480051 | orchestrator |
2026-03-07 00:26:57.480056 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-03-07 00:26:57.480062 | orchestrator | Saturday 07 March 2026 00:26:54 +0000 (0:00:02.070) 0:00:09.157 ********
2026-03-07 00:26:57.480067 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:26:57.480074 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:26:57.480081 | orchestrator |
2026-03-07 00:26:57.480087 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-03-07 00:26:57.480092 | orchestrator | Saturday 07 March 2026 00:26:55 +0000 (0:00:00.265) 0:00:09.423 ********
2026-03-07 00:26:57.480097 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:26:57.480103 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:26:57.480108 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:26:57.480114 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:26:57.480119 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:26:57.480124 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:26:57.480130 | orchestrator |
2026-03-07 00:26:57.480139 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-03-07 00:26:57.480148 | orchestrator | Saturday 07 March 2026 00:26:56 +0000 (0:00:01.033) 0:00:10.457 ********
2026-03-07 00:26:57.480153 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:26:57.480159 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:26:57.480164 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:26:57.480169 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:26:57.480175 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:26:57.480180 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:26:57.480185 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:26:57.480191 | orchestrator |
2026-03-07 00:26:57.480196 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-03-07 00:26:57.480202 | orchestrator | Saturday 07 March 2026 00:26:56 +0000 (0:00:00.621) 0:00:11.078 ********
2026-03-07 00:26:57.480207 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:26:57.480212 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:26:57.480218 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:26:57.480224 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:26:57.480253 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:26:57.480259 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:26:57.480265 | orchestrator | ok: [testbed-manager]
2026-03-07 00:26:57.480270 | orchestrator |
2026-03-07 00:26:57.480275 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-07 00:26:57.480282 | orchestrator | Saturday 07 March 2026 00:26:57 +0000 (0:00:00.441) 0:00:11.520 ********
2026-03-07 00:26:57.480287 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:26:57.480293 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:26:57.480302 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:27:10.391611 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:27:10.391755 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:27:10.391783 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:27:10.391801 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:27:10.391821 | orchestrator |
2026-03-07 00:27:10.391840 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-07 00:27:10.391861 | orchestrator | Saturday 07 March 2026 00:26:57 +0000 (0:00:00.220) 0:00:11.741 ********
2026-03-07 00:27:10.391882 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:27:10.391916 | orchestrator |
2026-03-07 00:27:10.391929 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-07 00:27:10.391941 | orchestrator | Saturday 07 March 2026 00:26:57 +0000 (0:00:00.310) 0:00:12.051 ********
2026-03-07 00:27:10.391952 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:27:10.391964 | orchestrator |
2026-03-07 00:27:10.391975 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-07 00:27:10.391986 | orchestrator | Saturday 07 March 2026 00:26:58 +0000 (0:00:00.321) 0:00:12.373 ********
2026-03-07 00:27:10.391997 | orchestrator | ok: [testbed-manager]
2026-03-07 00:27:10.392009 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:27:10.392020 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:27:10.392030 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:27:10.392041 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:27:10.392052 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:27:10.392063 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:27:10.392074 | orchestrator |
2026-03-07 00:27:10.392085 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-07 00:27:10.392098 | orchestrator | Saturday 07 March 2026 00:26:59 +0000 (0:00:01.636) 0:00:14.009 ********
2026-03-07 00:27:10.392149 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:27:10.392168 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:27:10.392186 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:27:10.392203 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:27:10.392296 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:27:10.392315 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:27:10.392333 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:27:10.392350 | orchestrator |
2026-03-07 00:27:10.392370 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-07 00:27:10.392387 | orchestrator | Saturday 07 March 2026 00:27:00 +0000 (0:00:00.221) 0:00:14.231 ********
2026-03-07 00:27:10.392404 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:27:10.392420 | orchestrator | ok: [testbed-manager]
2026-03-07 00:27:10.392438 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:27:10.392456 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:27:10.392475 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:27:10.392495 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:27:10.392513 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:27:10.392530 | orchestrator |
2026-03-07 00:27:10.392550 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-07 00:27:10.392569 | orchestrator | Saturday 07 March 2026 00:27:00 +0000 (0:00:00.398) 0:00:14.911 ********
2026-03-07 00:27:10.392587 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:27:10.392606 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:27:10.392626 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:27:10.392637 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:27:10.392647 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:27:10.392658 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:27:10.392669 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:27:10.392680 | orchestrator |
2026-03-07 00:27:10.392691 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-07 00:27:10.392704 | orchestrator | Saturday 07 March 2026 00:27:01 +0000 (0:00:00.398) 0:00:15.309 ********
2026-03-07 00:27:10.392714 | orchestrator | ok: [testbed-manager]
2026-03-07 00:27:10.392725 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:27:10.392736 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:27:10.392747 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:27:10.392757 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:27:10.392768 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:27:10.392779 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:27:10.392796 | orchestrator |
2026-03-07 00:27:10.392828 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-07 00:27:10.392848 | orchestrator | Saturday 07 March 2026 00:27:01 +0000 (0:00:00.631) 0:00:15.941 ********
2026-03-07 00:27:10.392866 | orchestrator | ok: [testbed-manager]
2026-03-07 00:27:10.392884 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:27:10.392903 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:27:10.392922 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:27:10.392941 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:27:10.392959 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:27:10.392976 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:27:10.392987 | orchestrator |
2026-03-07 00:27:10.392998 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-07 00:27:10.393012 | orchestrator | Saturday 07 March 2026 00:27:02 +0000 (0:00:01.209) 0:00:17.151 ********
2026-03-07 00:27:10.393030 | orchestrator | ok: [testbed-manager]
2026-03-07 00:27:10.393042 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:27:10.393053 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:27:10.393064 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:27:10.393075 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:27:10.393085 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:27:10.393095 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:27:10.393106 | orchestrator |
2026-03-07 00:27:10.393117 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-07 00:27:10.393143 | orchestrator | Saturday 07 March 2026 00:27:04 +0000 (0:00:01.090) 0:00:18.241 ********
2026-03-07 00:27:10.393181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:27:10.393194 | orchestrator |
2026-03-07 00:27:10.393205 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-07 00:27:10.393241 | orchestrator | Saturday 07 March 2026 00:27:04 +0000 (0:00:00.313) 0:00:18.554 ********
2026-03-07 00:27:10.393252 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:27:10.393263 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:27:10.393274 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:27:10.393284 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:27:10.393295 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:27:10.393306 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:27:10.393317 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:27:10.393327 | orchestrator |
2026-03-07 00:27:10.393338 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-07 00:27:10.393350 | orchestrator | Saturday 07 March 2026 00:27:05 +0000 (0:00:01.394) 0:00:19.948 ********
2026-03-07 00:27:10.393369 | orchestrator | ok: [testbed-manager]
2026-03-07 00:27:10.393380 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:27:10.393391 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:27:10.393401 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:27:10.393412 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:27:10.393423 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:27:10.393433 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:27:10.393444 | orchestrator |
2026-03-07 00:27:10.393455 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-07 00:27:10.393465 | orchestrator | Saturday 07 March 2026 00:27:05 +0000 (0:00:00.212) 0:00:20.161 ********
2026-03-07 00:27:10.393476 | orchestrator | ok: [testbed-manager]
2026-03-07 00:27:10.393487 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:27:10.393497 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:27:10.393508 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:27:10.393518 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:27:10.393528 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:27:10.393539 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:27:10.393549 | orchestrator |
2026-03-07 00:27:10.393560 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-07 00:27:10.393571 | orchestrator | Saturday 07 March 2026 00:27:06 +0000 (0:00:00.253) 0:00:20.414 ********
2026-03-07 00:27:10.393582 | orchestrator | ok: [testbed-manager]
2026-03-07 00:27:10.393592 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:27:10.393603 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:27:10.393613 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:27:10.393624 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:27:10.393634 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:27:10.393645 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:27:10.393655 | orchestrator |
2026-03-07 00:27:10.393666 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-07 00:27:10.393677 | orchestrator | Saturday 07 March 2026 00:27:06 +0000 (0:00:00.226) 0:00:20.640 ********
2026-03-07 00:27:10.393693 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:27:10.393714 | orchestrator |
2026-03-07 00:27:10.393732 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-07 00:27:10.393750 | orchestrator | Saturday 07 March 2026 00:27:06 +0000 (0:00:00.256) 0:00:20.897 ********
2026-03-07 00:27:10.393765 | orchestrator | ok: [testbed-manager]
2026-03-07 00:27:10.393782 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:27:10.393814 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:27:10.393835 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:27:10.393853 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:27:10.393870 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:27:10.393889 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:27:10.393900 | orchestrator |
2026-03-07 00:27:10.393911 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-07 00:27:10.393921 | orchestrator | Saturday 07 March 2026 00:27:07 +0000 (0:00:00.553) 0:00:21.450 ********
2026-03-07 00:27:10.393932 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:27:10.393943 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:27:10.393954 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:27:10.393964 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:27:10.393975 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:27:10.393986 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:27:10.393996 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:27:10.394007 | orchestrator |
2026-03-07 00:27:10.394087 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-07 00:27:10.394102 | orchestrator | Saturday 07 March 2026 00:27:07 +0000 (0:00:00.215) 0:00:21.665 ********
2026-03-07 00:27:10.394113 | orchestrator | ok: [testbed-manager]
2026-03-07 00:27:10.394124 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:27:10.394134 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:27:10.394145 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:27:10.394156 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:27:10.394169 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:27:10.394188 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:27:10.394199 | orchestrator |
2026-03-07 00:27:10.394210 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-07 00:27:10.394253 | orchestrator | Saturday 07 March 2026 00:27:08 +0000 (0:00:01.160) 0:00:22.825 ********
2026-03-07 00:27:10.394264 | orchestrator | ok: [testbed-manager]
2026-03-07 00:27:10.394275 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:27:10.394295 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:27:10.394306 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:27:10.394317 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:27:10.394328 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:27:10.394338 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:27:10.394349 | orchestrator |
2026-03-07 00:27:10.394360 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-07 00:27:10.394371 | orchestrator | Saturday 07 March 2026 00:27:09 +0000 (0:00:00.587) 0:00:23.413 ********
2026-03-07 00:27:10.394382 | orchestrator | ok: [testbed-manager]
2026-03-07 00:27:10.394393 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:27:10.394403 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:27:10.394425 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:27:10.394449 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:27:53.406897 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:27:53.407020 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:27:53.407037 | orchestrator |
2026-03-07 00:27:53.407050 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-07 00:27:53.407063 | orchestrator | Saturday 07 March 2026 00:27:10 +0000 (0:00:01.171) 0:00:24.584 ********
2026-03-07 00:27:53.407074 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:27:53.407086 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:27:53.407097 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:27:53.407108 | orchestrator | changed: [testbed-manager]
2026-03-07 00:27:53.407119 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:27:53.407131 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:27:53.407142 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:27:53.407206 | orchestrator |
2026-03-07 00:27:53.407218 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-07 00:27:53.407230 | orchestrator | Saturday 07 March 2026 00:27:27 +0000 (0:00:17.573) 0:00:42.158 ********
2026-03-07 00:27:53.407241 | orchestrator | ok: [testbed-manager]
2026-03-07 00:27:53.407277 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:27:53.407288 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:27:53.407299 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:27:53.407310 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:27:53.407321 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:27:53.407332 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:27:53.407342 | orchestrator |
2026-03-07 00:27:53.407353 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-07 00:27:53.407365 | orchestrator | Saturday 07 March 2026 00:27:28 +0000 (0:00:00.259) 0:00:42.417 ********
2026-03-07 00:27:53.407375 | orchestrator | ok: [testbed-manager]
2026-03-07 00:27:53.407386 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:27:53.407397 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:27:53.407408 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:27:53.407418 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:27:53.407430 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:27:53.407442 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:27:53.407455 | orchestrator |
2026-03-07 00:27:53.407468 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-07 00:27:53.407480 | orchestrator | Saturday 07 March 2026 00:27:28 +0000 (0:00:00.232) 0:00:42.650 ********
2026-03-07 00:27:53.407493 | orchestrator | ok: [testbed-manager]
2026-03-07 00:27:53.407506 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:27:53.407518 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:27:53.407530 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:27:53.407543 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:27:53.407555 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:27:53.407568 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:27:53.407581 | orchestrator |
2026-03-07 00:27:53.407594 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-07 00:27:53.407607 | orchestrator | Saturday 07 March 2026 00:27:28 +0000 (0:00:00.225) 0:00:42.875 ******** 2026-03-07
00:27:53.407623 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:27:53.407639 | orchestrator | 2026-03-07 00:27:53.407651 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-07 00:27:53.407664 | orchestrator | Saturday 07 March 2026 00:27:28 +0000 (0:00:00.276) 0:00:43.152 ******** 2026-03-07 00:27:53.407677 | orchestrator | ok: [testbed-manager] 2026-03-07 00:27:53.407690 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:27:53.407702 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:27:53.407715 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:27:53.407726 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:27:53.407736 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:27:53.407747 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:27:53.407758 | orchestrator | 2026-03-07 00:27:53.407769 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-07 00:27:53.407780 | orchestrator | Saturday 07 March 2026 00:27:30 +0000 (0:00:01.966) 0:00:45.119 ******** 2026-03-07 00:27:53.407790 | orchestrator | changed: [testbed-manager] 2026-03-07 00:27:53.407801 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:27:53.407812 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:27:53.407823 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:27:53.407834 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:27:53.407845 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:27:53.407855 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:27:53.407866 | orchestrator | 2026-03-07 00:27:53.407877 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-03-07 00:27:53.407888 | 
orchestrator | Saturday 07 March 2026 00:27:32 +0000 (0:00:01.159) 0:00:46.278 ******** 2026-03-07 00:27:53.407912 | orchestrator | ok: [testbed-manager] 2026-03-07 00:27:53.407924 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:27:53.407936 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:27:53.407954 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:27:53.407982 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:27:53.408000 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:27:53.408018 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:27:53.408035 | orchestrator | 2026-03-07 00:27:53.408055 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-03-07 00:27:53.408073 | orchestrator | Saturday 07 March 2026 00:27:32 +0000 (0:00:00.811) 0:00:47.089 ******** 2026-03-07 00:27:53.408093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:27:53.408114 | orchestrator | 2026-03-07 00:27:53.408131 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-03-07 00:27:53.408199 | orchestrator | Saturday 07 March 2026 00:27:33 +0000 (0:00:00.234) 0:00:47.323 ******** 2026-03-07 00:27:53.408219 | orchestrator | changed: [testbed-manager] 2026-03-07 00:27:53.408237 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:27:53.408256 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:27:53.408274 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:27:53.408293 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:27:53.408311 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:27:53.408327 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:27:53.408338 | orchestrator | 2026-03-07 00:27:53.408369 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-03-07 00:27:53.408381 | orchestrator | Saturday 07 March 2026 00:27:34 +0000 (0:00:01.054) 0:00:48.378 ******** 2026-03-07 00:27:53.408391 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:27:53.408402 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:27:53.408413 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:27:53.408424 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:27:53.408434 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:27:53.408445 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:27:53.408455 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:27:53.408466 | orchestrator | 2026-03-07 00:27:53.408477 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-03-07 00:27:53.408487 | orchestrator | Saturday 07 March 2026 00:27:34 +0000 (0:00:00.187) 0:00:48.565 ******** 2026-03-07 00:27:53.408499 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:27:53.408510 | orchestrator | 2026-03-07 00:27:53.408521 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-03-07 00:27:53.408531 | orchestrator | Saturday 07 March 2026 00:27:34 +0000 (0:00:00.261) 0:00:48.827 ******** 2026-03-07 00:27:53.408542 | orchestrator | ok: [testbed-manager] 2026-03-07 00:27:53.408553 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:27:53.408564 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:27:53.408574 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:27:53.408585 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:27:53.408595 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:27:53.408606 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:27:53.408617 | 
orchestrator | 2026-03-07 00:27:53.408628 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-03-07 00:27:53.408638 | orchestrator | Saturday 07 March 2026 00:27:36 +0000 (0:00:02.081) 0:00:50.909 ******** 2026-03-07 00:27:53.408649 | orchestrator | changed: [testbed-manager] 2026-03-07 00:27:53.408660 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:27:53.408670 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:27:53.408681 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:27:53.408692 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:27:53.408702 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:27:53.408713 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:27:53.408724 | orchestrator | 2026-03-07 00:27:53.408744 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-03-07 00:27:53.408756 | orchestrator | Saturday 07 March 2026 00:27:37 +0000 (0:00:01.100) 0:00:52.009 ******** 2026-03-07 00:27:53.408767 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:27:53.408777 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:27:53.408788 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:27:53.408799 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:27:53.408809 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:27:53.408820 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:27:53.408831 | orchestrator | changed: [testbed-manager] 2026-03-07 00:27:53.408841 | orchestrator | 2026-03-07 00:27:53.408852 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-03-07 00:27:53.408863 | orchestrator | Saturday 07 March 2026 00:27:50 +0000 (0:00:12.844) 0:01:04.853 ******** 2026-03-07 00:27:53.408874 | orchestrator | ok: [testbed-manager] 2026-03-07 00:27:53.408884 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:27:53.408895 | orchestrator | ok: 
[testbed-node-0] 2026-03-07 00:27:53.408906 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:27:53.408916 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:27:53.408927 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:27:53.408937 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:27:53.408948 | orchestrator | 2026-03-07 00:27:53.408959 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-03-07 00:27:53.408969 | orchestrator | Saturday 07 March 2026 00:27:51 +0000 (0:00:01.024) 0:01:05.878 ******** 2026-03-07 00:27:53.408980 | orchestrator | ok: [testbed-manager] 2026-03-07 00:27:53.408991 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:27:53.409001 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:27:53.409012 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:27:53.409022 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:27:53.409033 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:27:53.409043 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:27:53.409054 | orchestrator | 2026-03-07 00:27:53.409065 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-03-07 00:27:53.409076 | orchestrator | Saturday 07 March 2026 00:27:52 +0000 (0:00:00.940) 0:01:06.818 ******** 2026-03-07 00:27:53.409086 | orchestrator | ok: [testbed-manager] 2026-03-07 00:27:53.409106 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:27:53.409117 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:27:53.409127 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:27:53.409138 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:27:53.409184 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:27:53.409199 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:27:53.409210 | orchestrator | 2026-03-07 00:27:53.409221 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-07 00:27:53.409232 | orchestrator | Saturday 
07 March 2026 00:27:52 +0000 (0:00:00.262) 0:01:07.081 ******** 2026-03-07 00:27:53.409242 | orchestrator | ok: [testbed-manager] 2026-03-07 00:27:53.409253 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:27:53.409263 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:27:53.409274 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:27:53.409284 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:27:53.409295 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:27:53.409306 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:27:53.409316 | orchestrator | 2026-03-07 00:27:53.409327 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-07 00:27:53.409337 | orchestrator | Saturday 07 March 2026 00:27:53 +0000 (0:00:00.237) 0:01:07.319 ******** 2026-03-07 00:27:53.409349 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:27:53.409360 | orchestrator | 2026-03-07 00:27:53.409379 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-07 00:30:21.449365 | orchestrator | Saturday 07 March 2026 00:27:53 +0000 (0:00:00.281) 0:01:07.600 ******** 2026-03-07 00:30:21.449484 | orchestrator | ok: [testbed-manager] 2026-03-07 00:30:21.449499 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:30:21.449511 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:30:21.449522 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:30:21.449532 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:30:21.449543 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:30:21.449554 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:30:21.449565 | orchestrator | 2026-03-07 00:30:21.449576 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2026-03-07 00:30:21.449587 | orchestrator | Saturday 07 March 2026 00:27:55 +0000 (0:00:02.003) 0:01:09.604 ******** 2026-03-07 00:30:21.449598 | orchestrator | changed: [testbed-manager] 2026-03-07 00:30:21.449610 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:30:21.449621 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:30:21.449632 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:30:21.449642 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:30:21.449653 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:30:21.449663 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:30:21.449674 | orchestrator | 2026-03-07 00:30:21.449685 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-07 00:30:21.449697 | orchestrator | Saturday 07 March 2026 00:27:55 +0000 (0:00:00.598) 0:01:10.202 ******** 2026-03-07 00:30:21.449708 | orchestrator | ok: [testbed-manager] 2026-03-07 00:30:21.449718 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:30:21.449729 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:30:21.449740 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:30:21.449750 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:30:21.449761 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:30:21.449772 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:30:21.449782 | orchestrator | 2026-03-07 00:30:21.449793 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-07 00:30:21.449805 | orchestrator | Saturday 07 March 2026 00:27:56 +0000 (0:00:00.218) 0:01:10.421 ******** 2026-03-07 00:30:21.449816 | orchestrator | ok: [testbed-manager] 2026-03-07 00:30:21.449827 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:30:21.449838 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:30:21.449848 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:30:21.449859 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:30:21.449869 | 
orchestrator | ok: [testbed-node-2] 2026-03-07 00:30:21.449880 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:30:21.449891 | orchestrator | 2026-03-07 00:30:21.449903 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-07 00:30:21.449916 | orchestrator | Saturday 07 March 2026 00:27:57 +0000 (0:00:01.609) 0:01:12.031 ******** 2026-03-07 00:30:21.449954 | orchestrator | changed: [testbed-manager] 2026-03-07 00:30:21.449967 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:30:21.449980 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:30:21.449992 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:30:21.450005 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:30:21.450076 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:30:21.450091 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:30:21.450103 | orchestrator | 2026-03-07 00:30:21.450116 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-07 00:30:21.450134 | orchestrator | Saturday 07 March 2026 00:27:59 +0000 (0:00:02.172) 0:01:14.203 ******** 2026-03-07 00:30:21.450147 | orchestrator | ok: [testbed-manager] 2026-03-07 00:30:21.450159 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:30:21.450172 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:30:21.450184 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:30:21.450197 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:30:21.450209 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:30:21.450221 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:30:21.450233 | orchestrator | 2026-03-07 00:30:21.450246 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-07 00:30:21.450280 | orchestrator | Saturday 07 March 2026 00:28:02 +0000 (0:00:02.951) 0:01:17.154 ******** 2026-03-07 00:30:21.450292 | orchestrator | ok: [testbed-manager] 2026-03-07 00:30:21.450302 
| orchestrator | ok: [testbed-node-1] 2026-03-07 00:30:21.450313 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:30:21.450323 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:30:21.450334 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:30:21.450344 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:30:21.450355 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:30:21.450365 | orchestrator | 2026-03-07 00:30:21.450376 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-07 00:30:21.450387 | orchestrator | Saturday 07 March 2026 00:28:36 +0000 (0:00:33.956) 0:01:51.110 ******** 2026-03-07 00:30:21.450398 | orchestrator | changed: [testbed-manager] 2026-03-07 00:30:21.450409 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:30:21.450419 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:30:21.450430 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:30:21.450441 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:30:21.450451 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:30:21.450462 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:30:21.450473 | orchestrator | 2026-03-07 00:30:21.450483 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-07 00:30:21.450494 | orchestrator | Saturday 07 March 2026 00:30:04 +0000 (0:01:27.784) 0:03:18.895 ******** 2026-03-07 00:30:21.450505 | orchestrator | ok: [testbed-manager] 2026-03-07 00:30:21.450516 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:30:21.450526 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:30:21.450537 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:30:21.450548 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:30:21.450558 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:30:21.450569 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:30:21.450579 | orchestrator | 2026-03-07 00:30:21.450590 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2026-03-07 00:30:21.450601 | orchestrator | Saturday 07 March 2026 00:30:06 +0000 (0:00:01.870) 0:03:20.765 ******** 2026-03-07 00:30:21.450611 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:30:21.450622 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:30:21.450633 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:30:21.450643 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:30:21.450654 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:30:21.450664 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:30:21.450675 | orchestrator | changed: [testbed-manager] 2026-03-07 00:30:21.450685 | orchestrator | 2026-03-07 00:30:21.450696 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-07 00:30:21.450707 | orchestrator | Saturday 07 March 2026 00:30:20 +0000 (0:00:13.591) 0:03:34.356 ******** 2026-03-07 00:30:21.450754 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-07 00:30:21.450789 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-07 00:30:21.450804 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-07 00:30:21.450827 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-07 00:30:21.450839 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-07 00:30:21.450850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-03-07 00:30:21.450861 | orchestrator | 2026-03-07 00:30:21.450872 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-07 00:30:21.450883 | orchestrator | Saturday 07 March 2026 00:30:20 +0000 (0:00:00.444) 0:03:34.801 ******** 2026-03-07 00:30:21.450894 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-07 00:30:21.450912 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-07 00:30:21.450955 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:30:21.450974 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:30:21.450992 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-07 00:30:21.451019 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-07 00:30:21.451034 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:30:21.451045 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:30:21.451056 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-07 00:30:21.451069 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-07 00:30:21.451088 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-07 00:30:21.451107 | orchestrator | 2026-03-07 00:30:21.451125 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-07 00:30:21.451144 | orchestrator | Saturday 07 March 2026 00:30:21 +0000 (0:00:00.753) 0:03:35.554 ******** 2026-03-07 00:30:21.451162 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-07 00:30:21.451174 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-07 00:30:21.451185 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-07 00:30:21.451196 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-07 00:30:21.451207 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-07 00:30:21.451227 | 
orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-07 00:30:31.479036 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-07 00:30:31.479122 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-07 00:30:31.479131 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-07 00:30:31.479155 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-07 00:30:31.479162 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-07 00:30:31.479168 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-07 00:30:31.479174 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-07 00:30:31.479180 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-07 00:30:31.479186 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-07 00:30:31.479192 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-07 00:30:31.479198 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-07 00:30:31.479204 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-07 00:30:31.479210 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-07 00:30:31.479215 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-07 00:30:31.479221 | orchestrator | 
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-07 00:30:31.479227 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-07 00:30:31.479233 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-07 00:30:31.479239 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-07 00:30:31.479244 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-07 00:30:31.479250 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:30:31.479258 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-07 00:30:31.479264 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-07 00:30:31.479270 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-07 00:30:31.479275 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-07 00:30:31.479281 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:30:31.479287 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-07 00:30:31.479293 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-07 00:30:31.479299 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-07 00:30:31.479304 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-07 00:30:31.479310 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-07 00:30:31.479316 | orchestrator 
| skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-07 00:30:31.479333 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-07 00:30:31.479339 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-07 00:30:31.479345 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-07 00:30:31.479351 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-07 00:30:31.479356 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-07 00:30:31.479367 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:30:31.479373 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:30:31.479379 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-07 00:30:31.479385 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-07 00:30:31.479391 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-07 00:30:31.479396 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-07 00:30:31.479402 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-07 00:30:31.479420 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-07 00:30:31.479426 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-07 00:30:31.479432 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-07 00:30:31.479438 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-07 00:30:31.479444 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-07 00:30:31.479459 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-07 00:30:31.479465 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-07 00:30:31.479471 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-07 00:30:31.479476 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-07 00:30:31.479482 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-07 00:30:31.479488 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-07 00:30:31.479494 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-07 00:30:31.479500 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-07 00:30:31.479505 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-07 00:30:31.479511 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-07 00:30:31.479517 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-07 00:30:31.479523 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-07 00:30:31.479528 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-07 00:30:31.479534 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-07 00:30:31.479540 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-07 00:30:31.479546 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-07 00:30:31.479551 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-07 00:30:31.479557 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-07 00:30:31.479563 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-07 00:30:31.479568 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-07 00:30:31.479575 | orchestrator |
2026-03-07 00:30:31.479582 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-07 00:30:31.479594 | orchestrator | Saturday 07 March 2026 00:30:29 +0000 (0:00:07.935) 0:03:43.490 ********
2026-03-07 00:30:31.479601 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-07 00:30:31.479607 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-07 00:30:31.479614 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-07 00:30:31.479620 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-07 00:30:31.479628 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-07 00:30:31.479638 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-07 00:30:31.479645 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-07 00:30:31.479652 | orchestrator |
2026-03-07 00:30:31.479659 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-07 00:30:31.479666 | orchestrator | Saturday 07 March 2026 00:30:29 +0000 (0:00:00.632) 0:03:44.122 ********
2026-03-07 00:30:31.479672 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:30:31.479679 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:30:31.479686 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:30:31.479692 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:30:31.479699 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:30:31.479707 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:30:31.479713 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:30:31.479720 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:30:31.479727 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:30:31.479733 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:30:31.479744 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:30:45.896649 | orchestrator |
2026-03-07 00:30:45.896743 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-07 00:30:45.896753 | orchestrator | Saturday 07 March 2026 00:30:31 +0000 (0:00:01.548) 0:03:45.671 ********
2026-03-07 00:30:45.896761 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:30:45.896771 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:30:45.896778 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:30:45.896787 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:30:45.896794 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:30:45.896801 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:30:45.896808 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:30:45.896814 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:30:45.896820 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:30:45.896827 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:30:45.896833 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-07 00:30:45.896840 | orchestrator |
2026-03-07 00:30:45.896846 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-07 00:30:45.896868 | orchestrator | Saturday 07 March 2026 00:30:32 +0000 (0:00:00.619) 0:03:46.290 ********
2026-03-07 00:30:45.896875 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-07 00:30:45.896945 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:30:45.896953 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-07 00:30:45.896960 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:30:45.896967 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-07 00:30:45.896974 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:30:45.896980 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-07 00:30:45.896986 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:30:45.896992 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-07 00:30:45.896999 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-07 00:30:45.897005 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-07 00:30:45.897011 | orchestrator |
2026-03-07 00:30:45.897018 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-07 00:30:45.897024 | orchestrator | Saturday 07 March 2026 00:30:33 +0000 (0:00:01.599) 0:03:47.890 ********
2026-03-07 00:30:45.897030 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:30:45.897037 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:30:45.897043 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:30:45.897049 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:30:45.897055 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:30:45.897061 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:30:45.897068 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:30:45.897075 | orchestrator |
2026-03-07 00:30:45.897081 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-07 00:30:45.897089 | orchestrator | Saturday 07 March 2026 00:30:34 +0000 (0:00:00.339) 0:03:48.229 ********
2026-03-07 00:30:45.897096 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:30:45.897103 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:30:45.897110 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:30:45.897117 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:30:45.897123 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:30:45.897129 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:30:45.897135 | orchestrator | ok: [testbed-manager]
2026-03-07 00:30:45.897141 | orchestrator |
2026-03-07 00:30:45.897147 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-07 00:30:45.897153 | orchestrator | Saturday 07 March 2026 00:30:39 +0000 (0:00:05.617) 0:03:53.847 ********
2026-03-07 00:30:45.897159 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-07 00:30:45.897165 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:30:45.897171 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-07 00:30:45.897178 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-07 00:30:45.897184 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:30:45.897190 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-07 00:30:45.897196 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:30:45.897203 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-07 00:30:45.897209 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:30:45.897215 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:30:45.897232 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-07 00:30:45.897238 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:30:45.897243 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-07 00:30:45.897248 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:30:45.897254 | orchestrator |
2026-03-07 00:30:45.897259 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-03-07 00:30:45.897271 | orchestrator | Saturday 07 March 2026 00:30:39 +0000 (0:00:00.302) 0:03:54.149 ********
2026-03-07 00:30:45.897277 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-03-07 00:30:45.897284 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-03-07 00:30:45.897290 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-03-07 00:30:45.897309 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-03-07 00:30:45.897316 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-03-07 00:30:45.897322 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-03-07 00:30:45.897328 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-03-07 00:30:45.897334 | orchestrator |
2026-03-07 00:30:45.897340 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-03-07 00:30:45.897346 | orchestrator | Saturday 07 March 2026 00:30:41 +0000 (0:00:01.158) 0:03:55.307 ********
2026-03-07 00:30:45.897353 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:30:45.897361 | orchestrator |
2026-03-07 00:30:45.897367 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-03-07 00:30:45.897373 | orchestrator | Saturday 07 March 2026 00:30:41 +0000 (0:00:00.427) 0:03:55.735 ********
2026-03-07 00:30:45.897378 | orchestrator | ok: [testbed-manager]
2026-03-07 00:30:45.897384 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:30:45.897391 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:30:45.897397 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:30:45.897403 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:30:45.897409 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:30:45.897416 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:30:45.897422 | orchestrator |
2026-03-07 00:30:45.897428 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-03-07 00:30:45.897435 | orchestrator | Saturday 07 March 2026 00:30:42 +0000 (0:00:01.417) 0:03:57.152 ********
2026-03-07 00:30:45.897441 | orchestrator | ok: [testbed-manager]
2026-03-07 00:30:45.897447 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:30:45.897454 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:30:45.897460 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:30:45.897466 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:30:45.897472 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:30:45.897478 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:30:45.897483 | orchestrator |
2026-03-07 00:30:45.897489 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-03-07 00:30:45.897495 | orchestrator | Saturday 07 March 2026 00:30:43 +0000 (0:00:00.664) 0:03:57.817 ********
2026-03-07 00:30:45.897503 | orchestrator | changed: [testbed-manager]
2026-03-07 00:30:45.897508 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:30:45.897515 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:30:45.897522 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:30:45.897528 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:30:45.897535 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:30:45.897541 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:30:45.897548 | orchestrator |
2026-03-07 00:30:45.897555 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-03-07 00:30:45.897561 | orchestrator | Saturday 07 March 2026 00:30:44 +0000 (0:00:00.658) 0:03:58.459 ********
2026-03-07 00:30:45.897568 | orchestrator | ok: [testbed-manager]
2026-03-07 00:30:45.897574 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:30:45.897581 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:30:45.897587 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:30:45.897594 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:30:45.897600 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:30:45.897607 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:30:45.897614 | orchestrator |
2026-03-07 00:30:45.897621 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-03-07 00:30:45.897632 | orchestrator | Saturday 07 March 2026 00:30:44 +0000 (0:00:00.658) 0:03:59.118 ********
2026-03-07 00:30:45.897645 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772841877.7626507, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 00:30:45.897654 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772841915.1383936, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 00:30:45.897661 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772841906.2326527, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 00:30:45.897682 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772841905.1066642, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 00:30:50.949825 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772841899.0122132, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 00:30:50.950013 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772841908.7308023, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 00:30:50.950094 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772841910.9668336, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 00:30:50.950135 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 00:30:50.950162 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 00:30:50.950174 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 00:30:50.950185 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 00:30:50.950225 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 00:30:50.950238 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 00:30:50.950249 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 00:30:50.950269 | orchestrator |
2026-03-07 00:30:50.950283 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-03-07 00:30:50.950295 | orchestrator | Saturday 07 March 2026 00:30:45 +0000 (0:00:00.973) 0:04:00.091 ********
2026-03-07 00:30:50.950306 | orchestrator | changed: [testbed-manager]
2026-03-07 00:30:50.950318 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:30:50.950329 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:30:50.950340 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:30:50.950350 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:30:50.950362 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:30:50.950373 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:30:50.950384 | orchestrator |
2026-03-07 00:30:50.950395 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-03-07 00:30:50.950405 | orchestrator | Saturday 07 March 2026 00:30:47 +0000 (0:00:01.130) 0:04:01.221 ********
2026-03-07 00:30:50.950416 | orchestrator | changed: [testbed-manager]
2026-03-07 00:30:50.950427 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:30:50.950437 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:30:50.950448 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:30:50.950458 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:30:50.950469 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:30:50.950480 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:30:50.950491 | orchestrator |
2026-03-07 00:30:50.950507 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-03-07 00:30:50.950518 | orchestrator | Saturday 07 March 2026 00:30:48 +0000 (0:00:01.208) 0:04:02.429 ********
2026-03-07 00:30:50.950529 | orchestrator | changed: [testbed-manager]
2026-03-07 00:30:50.950540 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:30:50.950550 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:30:50.950561 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:30:50.950572 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:30:50.950582 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:30:50.950593 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:30:50.950603 | orchestrator |
2026-03-07 00:30:50.950614 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-03-07 00:30:50.950625 | orchestrator | Saturday 07 March 2026 00:30:49 +0000 (0:00:01.250) 0:04:03.679 ********
2026-03-07 00:30:50.950636 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:30:50.950647 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:30:50.950657 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:30:50.950668 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:30:50.950678 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:30:50.950689 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:30:50.950700 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:30:50.950710 | orchestrator |
2026-03-07 00:30:50.950721 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-03-07 00:30:50.950732 | orchestrator | Saturday 07 March 2026 00:30:49 +0000 (0:00:00.289) 0:04:03.969 ********
2026-03-07 00:30:50.950743 | orchestrator | ok: [testbed-manager]
2026-03-07 00:30:50.950755 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:30:50.950765 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:30:50.950776 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:30:50.950786 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:30:50.950797 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:30:50.950807 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:30:50.950818 | orchestrator |
2026-03-07 00:30:50.950829 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-03-07 00:30:50.950840 | orchestrator | Saturday 07 March 2026 00:30:50 +0000 (0:00:00.747) 0:04:04.716 ********
2026-03-07 00:30:50.950853 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:30:50.950941 | orchestrator |
2026-03-07 00:30:50.950956 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-03-07 00:30:50.950976 | orchestrator | Saturday 07 March 2026 00:30:50 +0000 (0:00:00.426) 0:04:05.143 ********
2026-03-07 00:32:10.100961 | orchestrator | ok: [testbed-manager]
2026-03-07 00:32:10.101464 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:32:10.101489 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:32:10.101505 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:32:10.101518 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:32:10.101531 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:32:10.101544 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:32:10.101557 | orchestrator |
2026-03-07 00:32:10.101571 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-03-07 00:32:10.101585 | orchestrator | Saturday 07 March 2026 00:30:59 +0000 (0:00:08.429) 0:04:13.572 ********
2026-03-07 00:32:10.101598 | orchestrator | ok: [testbed-manager]
2026-03-07 00:32:10.101611 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:32:10.101624 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:32:10.101636 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:32:10.101648 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:32:10.101661 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:32:10.101673 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:32:10.101686 | orchestrator |
2026-03-07 00:32:10.101698 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-03-07 00:32:10.101711 | orchestrator | Saturday 07 March 2026 00:31:00 +0000 (0:00:01.399) 0:04:14.972 ********
2026-03-07 00:32:10.101724 | orchestrator | ok: [testbed-manager]
2026-03-07 00:32:10.101779 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:32:10.101791 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:32:10.101804 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:32:10.101815 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:32:10.101828 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:32:10.101841 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:32:10.101853 | orchestrator |
2026-03-07 00:32:10.101864 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-03-07 00:32:10.101875 | orchestrator | Saturday 07 March 2026 00:31:02 +0000 (0:00:01.460) 0:04:16.432 ********
2026-03-07 00:32:10.101886 | orchestrator | ok: [testbed-manager]
2026-03-07 00:32:10.101897 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:32:10.101908 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:32:10.101918 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:32:10.101929 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:32:10.101940 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:32:10.101951 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:32:10.101962 | orchestrator |
2026-03-07 00:32:10.101973 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-07 00:32:10.101985 | orchestrator | Saturday 07 March 2026 00:31:02 +0000 (0:00:00.341) 0:04:16.773 ********
2026-03-07 00:32:10.101996 | orchestrator | ok: [testbed-manager]
2026-03-07 00:32:10.102006 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:32:10.102084 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:32:10.102096 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:32:10.102107 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:32:10.102118 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:32:10.102129 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:32:10.102174 | orchestrator |
2026-03-07 00:32:10.102186 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-07 00:32:10.102197 | orchestrator | Saturday 07 March 2026 00:31:02 +0000 (0:00:00.343) 0:04:17.117 ********
2026-03-07 00:32:10.102208 | orchestrator | ok: [testbed-manager]
2026-03-07 00:32:10.102219 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:32:10.102229 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:32:10.102240 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:32:10.102276 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:32:10.102288 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:32:10.102298 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:32:10.102309 | orchestrator |
2026-03-07 00:32:10.102320 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-07 00:32:10.102331 | orchestrator | Saturday 07 March 2026 00:31:03 +0000 (0:00:00.302) 0:04:17.419 ********
2026-03-07 00:32:10.102342 | orchestrator | ok: [testbed-manager]
2026-03-07 00:32:10.102352 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:32:10.102363 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:32:10.102374 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:32:10.102384 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:32:10.102395 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:32:10.102406 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:32:10.102416 | orchestrator |
2026-03-07 00:32:10.102427 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-07 00:32:10.102438 | orchestrator | Saturday 07 March 2026 00:31:09 +0000 (0:00:06.548) 0:04:23.967 ********
2026-03-07 00:32:10.102451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:32:10.102465 | orchestrator |
2026-03-07 00:32:10.102476 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-07 00:32:10.102487 | orchestrator | Saturday 07 March 2026 00:31:10 +0000 (0:00:00.414) 0:04:24.382 ********
2026-03-07 00:32:10.102498 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-07 00:32:10.102508 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-07 00:32:10.102519 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-07 00:32:10.102530 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-07 00:32:10.102541 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:32:10.102569 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-07 00:32:10.102581 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-07 00:32:10.102592 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:32:10.102603 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-07 00:32:10.102614 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-07 00:32:10.102624 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:32:10.102635 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-07 00:32:10.102646 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-07 00:32:10.102657 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:32:10.102668 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-07 00:32:10.102679 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-07 00:32:10.102712 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:32:10.102724 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:32:10.102753 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-07 00:32:10.102765 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-07 00:32:10.102775 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:32:10.102786 | orchestrator |
2026-03-07 00:32:10.102797 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-07 00:32:10.102808 | orchestrator | Saturday 07 March 2026 00:31:10 +0000 (0:00:00.394) 0:04:24.777 ********
2026-03-07 00:32:10.102819 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:32:10.102830 | orchestrator |
2026-03-07 00:32:10.102841 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-07 00:32:10.102852 | orchestrator | Saturday 07 March 2026 00:31:11 +0000 (0:00:00.440) 0:04:25.217 ********
2026-03-07 00:32:10.102872 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-07 00:32:10.102883 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:32:10.102894 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-07 00:32:10.102905 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-07 00:32:10.102916 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:32:10.102927 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-07 00:32:10.102937 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:32:10.102948 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-07 00:32:10.102959 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:32:10.102970 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-07 00:32:10.102980 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:32:10.102991 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:32:10.103002 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-07 00:32:10.103012 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:32:10.103023 | orchestrator |
2026-03-07 00:32:10.103034 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-07 00:32:10.103045 | orchestrator | Saturday 07 March 2026 00:31:11 +0000 (0:00:00.320) 0:04:25.537 ********
2026-03-07 00:32:10.103056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:32:10.103067 | orchestrator |
2026-03-07 00:32:10.103078 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-07 00:32:10.103089 | orchestrator | Saturday 07 March 2026 00:31:11 +0000 (0:00:00.467) 0:04:26.004 ********
2026-03-07 00:32:10.103100 | orchestrator | changed: [testbed-manager]
2026-03-07 00:32:10.103111 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:32:10.103122 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:32:10.103132 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:32:10.103143 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:32:10.103159 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:32:10.103171 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:32:10.103182 | orchestrator | 2026-03-07 00:32:10.103192 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-03-07 00:32:10.103203 | orchestrator | Saturday 07 March 2026 00:31:45 +0000 (0:00:33.882) 0:04:59.887 ******** 2026-03-07 00:32:10.103214 | orchestrator | changed: [testbed-manager] 2026-03-07 00:32:10.103225 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:32:10.103235 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:32:10.103246 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:32:10.103257 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:32:10.103267 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:32:10.103278 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:32:10.103289 | orchestrator | 2026-03-07 00:32:10.103299 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-03-07 00:32:10.103310 | orchestrator | Saturday 07 March 2026 00:31:54 +0000 (0:00:08.384) 0:05:08.272 ******** 2026-03-07 00:32:10.103321 | orchestrator | changed: [testbed-manager] 2026-03-07 00:32:10.103332 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:32:10.103342 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:32:10.103353 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:32:10.103364 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:32:10.103374 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:32:10.103385 | orchestrator | changed: [testbed-node-1] 2026-03-07 
00:32:10.103395 | orchestrator | 2026-03-07 00:32:10.103406 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-03-07 00:32:10.103417 | orchestrator | Saturday 07 March 2026 00:32:01 +0000 (0:00:07.618) 0:05:15.891 ******** 2026-03-07 00:32:10.103434 | orchestrator | ok: [testbed-manager] 2026-03-07 00:32:10.103446 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:32:10.103456 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:32:10.103467 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:32:10.103478 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:32:10.103489 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:32:10.103499 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:32:10.103510 | orchestrator | 2026-03-07 00:32:10.103521 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-03-07 00:32:10.103532 | orchestrator | Saturday 07 March 2026 00:32:03 +0000 (0:00:02.095) 0:05:17.987 ******** 2026-03-07 00:32:10.103543 | orchestrator | changed: [testbed-manager] 2026-03-07 00:32:10.103554 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:32:10.103565 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:32:10.103575 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:32:10.103586 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:32:10.103597 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:32:10.103608 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:32:10.103618 | orchestrator | 2026-03-07 00:32:10.103637 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-03-07 00:32:21.331223 | orchestrator | Saturday 07 March 2026 00:32:10 +0000 (0:00:06.304) 0:05:24.292 ******** 2026-03-07 00:32:21.331360 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, 
testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:32:21.331389 | orchestrator | 2026-03-07 00:32:21.331409 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-03-07 00:32:21.331427 | orchestrator | Saturday 07 March 2026 00:32:10 +0000 (0:00:00.426) 0:05:24.718 ******** 2026-03-07 00:32:21.331439 | orchestrator | changed: [testbed-manager] 2026-03-07 00:32:21.331451 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:32:21.331462 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:32:21.331474 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:32:21.331485 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:32:21.331495 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:32:21.331506 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:32:21.331517 | orchestrator | 2026-03-07 00:32:21.331528 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-03-07 00:32:21.331540 | orchestrator | Saturday 07 March 2026 00:32:11 +0000 (0:00:00.718) 0:05:25.437 ******** 2026-03-07 00:32:21.331550 | orchestrator | ok: [testbed-manager] 2026-03-07 00:32:21.331562 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:32:21.331573 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:32:21.331584 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:32:21.331595 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:32:21.331605 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:32:21.331616 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:32:21.331627 | orchestrator | 2026-03-07 00:32:21.331637 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-03-07 00:32:21.331648 | orchestrator | Saturday 07 March 2026 00:32:13 +0000 (0:00:01.810) 0:05:27.248 ******** 2026-03-07 00:32:21.331659 | orchestrator | changed: [testbed-manager] 2026-03-07 00:32:21.331670 | orchestrator | 
changed: [testbed-node-5] 2026-03-07 00:32:21.331681 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:32:21.331691 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:32:21.331702 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:32:21.331766 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:32:21.331780 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:32:21.331794 | orchestrator | 2026-03-07 00:32:21.331806 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-03-07 00:32:21.331819 | orchestrator | Saturday 07 March 2026 00:32:13 +0000 (0:00:00.829) 0:05:28.077 ******** 2026-03-07 00:32:21.331855 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:32:21.331868 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:32:21.331881 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:32:21.331893 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:32:21.331906 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:32:21.331919 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:32:21.331937 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:32:21.331955 | orchestrator | 2026-03-07 00:32:21.331974 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-03-07 00:32:21.331993 | orchestrator | Saturday 07 March 2026 00:32:14 +0000 (0:00:00.290) 0:05:28.368 ******** 2026-03-07 00:32:21.332011 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:32:21.332029 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:32:21.332047 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:32:21.332063 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:32:21.332101 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:32:21.332122 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:32:21.332142 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:32:21.332160 | orchestrator | 2026-03-07 
00:32:21.332179 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-07 00:32:21.332190 | orchestrator | Saturday 07 March 2026 00:32:14 +0000 (0:00:00.400) 0:05:28.768 ******** 2026-03-07 00:32:21.332201 | orchestrator | ok: [testbed-manager] 2026-03-07 00:32:21.332211 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:32:21.332222 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:32:21.332232 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:32:21.332243 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:32:21.332254 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:32:21.332264 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:32:21.332275 | orchestrator | 2026-03-07 00:32:21.332285 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-07 00:32:21.332296 | orchestrator | Saturday 07 March 2026 00:32:14 +0000 (0:00:00.292) 0:05:29.061 ******** 2026-03-07 00:32:21.332307 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:32:21.332317 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:32:21.332332 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:32:21.332350 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:32:21.332367 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:32:21.332386 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:32:21.332406 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:32:21.332418 | orchestrator | 2026-03-07 00:32:21.332429 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-07 00:32:21.332440 | orchestrator | Saturday 07 March 2026 00:32:15 +0000 (0:00:00.308) 0:05:29.370 ******** 2026-03-07 00:32:21.332451 | orchestrator | ok: [testbed-manager] 2026-03-07 00:32:21.332461 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:32:21.332472 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:32:21.332483 | orchestrator | 
ok: [testbed-node-5] 2026-03-07 00:32:21.332493 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:32:21.332504 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:32:21.332514 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:32:21.332525 | orchestrator | 2026-03-07 00:32:21.332536 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-07 00:32:21.332547 | orchestrator | Saturday 07 March 2026 00:32:15 +0000 (0:00:00.317) 0:05:29.687 ******** 2026-03-07 00:32:21.332557 | orchestrator | ok: [testbed-manager] =>  2026-03-07 00:32:21.332568 | orchestrator |  docker_version: 5:27.5.1 2026-03-07 00:32:21.332579 | orchestrator | ok: [testbed-node-3] =>  2026-03-07 00:32:21.332589 | orchestrator |  docker_version: 5:27.5.1 2026-03-07 00:32:21.332600 | orchestrator | ok: [testbed-node-4] =>  2026-03-07 00:32:21.332610 | orchestrator |  docker_version: 5:27.5.1 2026-03-07 00:32:21.332621 | orchestrator | ok: [testbed-node-5] =>  2026-03-07 00:32:21.332631 | orchestrator |  docker_version: 5:27.5.1 2026-03-07 00:32:21.332662 | orchestrator | ok: [testbed-node-0] =>  2026-03-07 00:32:21.332685 | orchestrator |  docker_version: 5:27.5.1 2026-03-07 00:32:21.332696 | orchestrator | ok: [testbed-node-1] =>  2026-03-07 00:32:21.332735 | orchestrator |  docker_version: 5:27.5.1 2026-03-07 00:32:21.332752 | orchestrator | ok: [testbed-node-2] =>  2026-03-07 00:32:21.332763 | orchestrator |  docker_version: 5:27.5.1 2026-03-07 00:32:21.332774 | orchestrator | 2026-03-07 00:32:21.332784 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-07 00:32:21.332795 | orchestrator | Saturday 07 March 2026 00:32:15 +0000 (0:00:00.296) 0:05:29.984 ******** 2026-03-07 00:32:21.332806 | orchestrator | ok: [testbed-manager] =>  2026-03-07 00:32:21.332816 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-07 00:32:21.332827 | orchestrator | ok: [testbed-node-3] =>  2026-03-07 
00:32:21.332837 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-07 00:32:21.332848 | orchestrator | ok: [testbed-node-4] =>  2026-03-07 00:32:21.332858 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-07 00:32:21.332869 | orchestrator | ok: [testbed-node-5] =>  2026-03-07 00:32:21.332879 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-07 00:32:21.332890 | orchestrator | ok: [testbed-node-0] =>  2026-03-07 00:32:21.332900 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-07 00:32:21.332911 | orchestrator | ok: [testbed-node-1] =>  2026-03-07 00:32:21.332921 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-07 00:32:21.332932 | orchestrator | ok: [testbed-node-2] =>  2026-03-07 00:32:21.332942 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-07 00:32:21.332958 | orchestrator | 2026-03-07 00:32:21.332974 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-07 00:32:21.332985 | orchestrator | Saturday 07 March 2026 00:32:16 +0000 (0:00:00.283) 0:05:30.267 ******** 2026-03-07 00:32:21.332995 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:32:21.333006 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:32:21.333017 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:32:21.333027 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:32:21.333038 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:32:21.333048 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:32:21.333059 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:32:21.333070 | orchestrator | 2026-03-07 00:32:21.333080 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-07 00:32:21.333091 | orchestrator | Saturday 07 March 2026 00:32:16 +0000 (0:00:00.252) 0:05:30.519 ******** 2026-03-07 00:32:21.333102 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:32:21.333112 | orchestrator | skipping: [testbed-node-3] 
2026-03-07 00:32:21.333123 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:32:21.333140 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:32:21.333158 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:32:21.333177 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:32:21.333194 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:32:21.333212 | orchestrator | 2026-03-07 00:32:21.333229 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-07 00:32:21.333246 | orchestrator | Saturday 07 March 2026 00:32:16 +0000 (0:00:00.262) 0:05:30.782 ******** 2026-03-07 00:32:21.333268 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:32:21.333290 | orchestrator | 2026-03-07 00:32:21.333309 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-07 00:32:21.333338 | orchestrator | Saturday 07 March 2026 00:32:16 +0000 (0:00:00.420) 0:05:31.203 ******** 2026-03-07 00:32:21.333357 | orchestrator | ok: [testbed-manager] 2026-03-07 00:32:21.333376 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:32:21.333394 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:32:21.333413 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:32:21.333431 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:32:21.333442 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:32:21.333462 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:32:21.333472 | orchestrator | 2026-03-07 00:32:21.333483 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-07 00:32:21.333494 | orchestrator | Saturday 07 March 2026 00:32:18 +0000 (0:00:01.002) 0:05:32.205 ******** 2026-03-07 00:32:21.333505 | orchestrator 
| ok: [testbed-manager] 2026-03-07 00:32:21.333516 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:32:21.333526 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:32:21.333537 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:32:21.333547 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:32:21.333558 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:32:21.333569 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:32:21.333579 | orchestrator | 2026-03-07 00:32:21.333590 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-07 00:32:21.333602 | orchestrator | Saturday 07 March 2026 00:32:20 +0000 (0:00:02.960) 0:05:35.165 ******** 2026-03-07 00:32:21.333613 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-07 00:32:21.333625 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-07 00:32:21.333636 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-07 00:32:21.333646 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-07 00:32:21.333657 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-07 00:32:21.333668 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-07 00:32:21.333678 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:32:21.333689 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-07 00:32:21.333700 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-07 00:32:21.333792 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-07 00:32:21.333804 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:32:21.333815 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-03-07 00:32:21.333826 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-07 00:32:21.333836 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  
2026-03-07 00:32:21.333847 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:32:21.333858 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-07 00:32:21.333881 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-07 00:33:23.525727 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:33:23.525840 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-07 00:33:23.525856 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-03-07 00:33:23.525868 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-07 00:33:23.525879 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-07 00:33:23.525890 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:33:23.525901 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:33:23.525912 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-07 00:33:23.525923 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-07 00:33:23.525934 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-07 00:33:23.525944 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:33:23.525956 | orchestrator | 2026-03-07 00:33:23.525968 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-07 00:33:23.525980 | orchestrator | Saturday 07 March 2026 00:32:21 +0000 (0:00:00.580) 0:05:35.746 ******** 2026-03-07 00:33:23.525991 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:23.526002 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:33:23.526013 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:33:23.526082 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:33:23.526093 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:33:23.526105 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:33:23.526117 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:33:23.526152 | 
orchestrator | 2026-03-07 00:33:23.526163 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-07 00:33:23.526174 | orchestrator | Saturday 07 March 2026 00:32:28 +0000 (0:00:06.881) 0:05:42.628 ******** 2026-03-07 00:33:23.526185 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:33:23.526196 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:33:23.526207 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:23.526217 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:33:23.526229 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:33:23.526241 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:33:23.526253 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:33:23.526266 | orchestrator | 2026-03-07 00:33:23.526278 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-07 00:33:23.526291 | orchestrator | Saturday 07 March 2026 00:32:29 +0000 (0:00:01.077) 0:05:43.706 ******** 2026-03-07 00:33:23.526303 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:23.526316 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:33:23.526329 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:33:23.526342 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:33:23.526354 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:33:23.526366 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:33:23.526379 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:33:23.526393 | orchestrator | 2026-03-07 00:33:23.526405 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-07 00:33:23.526416 | orchestrator | Saturday 07 March 2026 00:32:37 +0000 (0:00:08.074) 0:05:51.780 ******** 2026-03-07 00:33:23.526427 | orchestrator | changed: [testbed-manager] 2026-03-07 00:33:23.526437 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:33:23.526448 | orchestrator | changed: 
[testbed-node-4] 2026-03-07 00:33:23.526459 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:33:23.526470 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:33:23.526480 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:33:23.526491 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:33:23.526502 | orchestrator | 2026-03-07 00:33:23.526513 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-07 00:33:23.526524 | orchestrator | Saturday 07 March 2026 00:32:41 +0000 (0:00:03.527) 0:05:55.308 ******** 2026-03-07 00:33:23.526534 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:23.526545 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:33:23.526556 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:33:23.526567 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:33:23.526578 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:33:23.526608 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:33:23.526619 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:33:23.526630 | orchestrator | 2026-03-07 00:33:23.526641 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-07 00:33:23.526652 | orchestrator | Saturday 07 March 2026 00:32:42 +0000 (0:00:01.307) 0:05:56.616 ******** 2026-03-07 00:33:23.526662 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:23.526673 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:33:23.526684 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:33:23.526694 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:33:23.526705 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:33:23.526715 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:33:23.526726 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:33:23.526737 | orchestrator | 2026-03-07 00:33:23.526748 | orchestrator | TASK [osism.services.docker : Unlock containerd package] 
*********************** 2026-03-07 00:33:23.526759 | orchestrator | Saturday 07 March 2026 00:32:43 +0000 (0:00:01.529) 0:05:58.145 ******** 2026-03-07 00:33:23.526770 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:33:23.526780 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:33:23.526791 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:33:23.526802 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:33:23.526813 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:33:23.526831 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:33:23.526842 | orchestrator | changed: [testbed-manager] 2026-03-07 00:33:23.526853 | orchestrator | 2026-03-07 00:33:23.526863 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-07 00:33:23.526874 | orchestrator | Saturday 07 March 2026 00:32:44 +0000 (0:00:00.694) 0:05:58.840 ******** 2026-03-07 00:33:23.526885 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:23.526896 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:33:23.526906 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:33:23.526917 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:33:23.526928 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:33:23.526938 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:33:23.526949 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:33:23.526960 | orchestrator | 2026-03-07 00:33:23.526971 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-07 00:33:23.526999 | orchestrator | Saturday 07 March 2026 00:32:54 +0000 (0:00:09.964) 0:06:08.804 ******** 2026-03-07 00:33:23.527011 | orchestrator | changed: [testbed-manager] 2026-03-07 00:33:23.527022 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:33:23.527033 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:33:23.527044 | orchestrator | changed: [testbed-node-5] 2026-03-07 
00:33:23.527054 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:33:23.527065 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:33:23.527076 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:33:23.527086 | orchestrator | 2026-03-07 00:33:23.527097 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-03-07 00:33:23.527108 | orchestrator | Saturday 07 March 2026 00:32:55 +0000 (0:00:01.063) 0:06:09.868 ******** 2026-03-07 00:33:23.527119 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:23.527130 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:33:23.527141 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:33:23.527152 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:33:23.527163 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:33:23.527173 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:33:23.527184 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:33:23.527195 | orchestrator | 2026-03-07 00:33:23.527205 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-03-07 00:33:23.527216 | orchestrator | Saturday 07 March 2026 00:33:04 +0000 (0:00:09.333) 0:06:19.201 ******** 2026-03-07 00:33:23.527227 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:23.527238 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:33:23.527248 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:33:23.527259 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:33:23.527270 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:33:23.527281 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:33:23.527291 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:33:23.527302 | orchestrator | 2026-03-07 00:33:23.527313 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-03-07 00:33:23.527324 | orchestrator | Saturday 07 March 2026 00:33:16 +0000 
(0:00:11.388) 0:06:30.590 ******** 2026-03-07 00:33:23.527335 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-03-07 00:33:23.527346 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-03-07 00:33:23.527357 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-03-07 00:33:23.527368 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-03-07 00:33:23.527379 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-03-07 00:33:23.527389 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-03-07 00:33:23.527400 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-03-07 00:33:23.527411 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-03-07 00:33:23.527422 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-03-07 00:33:23.527432 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-03-07 00:33:23.527449 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-03-07 00:33:23.527503 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-03-07 00:33:23.527515 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-03-07 00:33:23.527526 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-03-07 00:33:23.527536 | orchestrator | 2026-03-07 00:33:23.527547 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2026-03-07 00:33:23.527558 | orchestrator | Saturday 07 March 2026 00:33:17 +0000 (0:00:01.271) 0:06:31.862 ******** 2026-03-07 00:33:23.527569 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:33:23.527585 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:33:23.527660 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:33:23.527672 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:33:23.527682 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:33:23.527693 | orchestrator | skipping: 
[testbed-node-1] 2026-03-07 00:33:23.527704 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:33:23.527714 | orchestrator | 2026-03-07 00:33:23.527726 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-03-07 00:33:23.527736 | orchestrator | Saturday 07 March 2026 00:33:18 +0000 (0:00:00.530) 0:06:32.392 ******** 2026-03-07 00:33:23.527747 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:23.527758 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:33:23.527769 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:33:23.527779 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:33:23.527790 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:33:23.527801 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:33:23.527811 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:33:23.527822 | orchestrator | 2026-03-07 00:33:23.527833 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-03-07 00:33:23.527845 | orchestrator | Saturday 07 March 2026 00:33:22 +0000 (0:00:04.285) 0:06:36.678 ******** 2026-03-07 00:33:23.527856 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:33:23.527867 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:33:23.527878 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:33:23.527888 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:33:23.527899 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:33:23.527909 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:33:23.527920 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:33:23.527931 | orchestrator | 2026-03-07 00:33:23.527942 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-03-07 00:33:23.527953 | orchestrator | Saturday 07 March 2026 00:33:23 +0000 (0:00:00.532) 0:06:37.211 ******** 2026-03-07 
00:33:23.527964 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-03-07 00:33:23.527976 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-03-07 00:33:23.527986 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:33:23.527997 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-03-07 00:33:23.528008 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-03-07 00:33:23.528019 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:33:23.528029 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-03-07 00:33:23.528040 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-03-07 00:33:23.528051 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:33:23.528070 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-03-07 00:33:43.792668 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-03-07 00:33:43.792789 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:33:43.792805 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-03-07 00:33:43.792817 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-03-07 00:33:43.792828 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:33:43.792864 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-03-07 00:33:43.792877 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-03-07 00:33:43.792888 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:33:43.792899 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-07 00:33:43.792910 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-07 00:33:43.792921 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:33:43.792932 | orchestrator | 2026-03-07 00:33:43.792945 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install 
python bindings from pip)] *** 2026-03-07 00:33:43.792957 | orchestrator | Saturday 07 March 2026 00:33:23 +0000 (0:00:00.786) 0:06:37.997 ******** 2026-03-07 00:33:43.792968 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:33:43.792979 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:33:43.792989 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:33:43.793000 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:33:43.793011 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:33:43.793021 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:33:43.793048 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:33:43.793060 | orchestrator | 2026-03-07 00:33:43.793083 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-07 00:33:43.793094 | orchestrator | Saturday 07 March 2026 00:33:24 +0000 (0:00:00.538) 0:06:38.536 ******** 2026-03-07 00:33:43.793105 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:33:43.793116 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:33:43.793127 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:33:43.793137 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:33:43.793148 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:33:43.793159 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:33:43.793169 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:33:43.793180 | orchestrator | 2026-03-07 00:33:43.793191 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-03-07 00:33:43.793202 | orchestrator | Saturday 07 March 2026 00:33:24 +0000 (0:00:00.500) 0:06:39.037 ******** 2026-03-07 00:33:43.793213 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:33:43.793224 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:33:43.793235 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:33:43.793245 | orchestrator | skipping: 
[testbed-node-5] 2026-03-07 00:33:43.793256 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:33:43.793266 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:33:43.793277 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:33:43.793288 | orchestrator | 2026-03-07 00:33:43.793300 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-03-07 00:33:43.793320 | orchestrator | Saturday 07 March 2026 00:33:25 +0000 (0:00:00.508) 0:06:39.546 ******** 2026-03-07 00:33:43.793339 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:43.793357 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:33:43.793376 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:33:43.793393 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:33:43.793412 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:33:43.793431 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:33:43.793451 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:33:43.793470 | orchestrator | 2026-03-07 00:33:43.793490 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-03-07 00:33:43.793509 | orchestrator | Saturday 07 March 2026 00:33:27 +0000 (0:00:02.050) 0:06:41.597 ******** 2026-03-07 00:33:43.793529 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:33:43.793551 | orchestrator | 2026-03-07 00:33:43.793602 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-03-07 00:33:43.793620 | orchestrator | Saturday 07 March 2026 00:33:28 +0000 (0:00:00.876) 0:06:42.473 ******** 2026-03-07 00:33:43.793660 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:43.793680 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:33:43.793700 | orchestrator | changed: 
[testbed-node-4] 2026-03-07 00:33:43.793719 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:33:43.793738 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:33:43.793757 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:33:43.793768 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:33:43.793793 | orchestrator | 2026-03-07 00:33:43.793804 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-03-07 00:33:43.793815 | orchestrator | Saturday 07 March 2026 00:33:29 +0000 (0:00:00.893) 0:06:43.367 ******** 2026-03-07 00:33:43.793826 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:43.793837 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:33:43.793847 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:33:43.793858 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:33:43.793869 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:33:43.793879 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:33:43.793890 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:33:43.793901 | orchestrator | 2026-03-07 00:33:43.793912 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-03-07 00:33:43.793923 | orchestrator | Saturday 07 March 2026 00:33:30 +0000 (0:00:00.868) 0:06:44.235 ******** 2026-03-07 00:33:43.793933 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:43.793944 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:33:43.793955 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:33:43.793966 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:33:43.793976 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:33:43.793987 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:33:43.793997 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:33:43.794008 | orchestrator | 2026-03-07 00:33:43.794108 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-03-07 00:33:43.794157 | orchestrator | Saturday 07 March 2026 00:33:31 +0000 (0:00:01.607) 0:06:45.843 ******** 2026-03-07 00:33:43.794179 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:33:43.794198 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:33:43.794218 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:33:43.794238 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:33:43.794258 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:33:43.794276 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:33:43.794294 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:33:43.794311 | orchestrator | 2026-03-07 00:33:43.794328 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-03-07 00:33:43.794344 | orchestrator | Saturday 07 March 2026 00:33:33 +0000 (0:00:01.578) 0:06:47.422 ******** 2026-03-07 00:33:43.794360 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:43.794377 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:33:43.794394 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:33:43.794411 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:33:43.794429 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:33:43.794447 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:33:43.794464 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:33:43.794482 | orchestrator | 2026-03-07 00:33:43.794500 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-03-07 00:33:43.794517 | orchestrator | Saturday 07 March 2026 00:33:34 +0000 (0:00:01.309) 0:06:48.731 ******** 2026-03-07 00:33:43.794533 | orchestrator | changed: [testbed-manager] 2026-03-07 00:33:43.794550 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:33:43.794597 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:33:43.794616 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:33:43.794635 | orchestrator | changed: 
[testbed-node-0] 2026-03-07 00:33:43.794651 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:33:43.794669 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:33:43.794687 | orchestrator | 2026-03-07 00:33:43.794706 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-03-07 00:33:43.794740 | orchestrator | Saturday 07 March 2026 00:33:35 +0000 (0:00:01.408) 0:06:50.140 ******** 2026-03-07 00:33:43.794759 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:33:43.794778 | orchestrator | 2026-03-07 00:33:43.794798 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-03-07 00:33:43.794816 | orchestrator | Saturday 07 March 2026 00:33:36 +0000 (0:00:01.006) 0:06:51.146 ******** 2026-03-07 00:33:43.794834 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:43.794852 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:33:43.794869 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:33:43.794888 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:33:43.794907 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:33:43.794926 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:33:43.794943 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:33:43.794959 | orchestrator | 2026-03-07 00:33:43.794970 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-07 00:33:43.794981 | orchestrator | Saturday 07 March 2026 00:33:38 +0000 (0:00:01.355) 0:06:52.502 ******** 2026-03-07 00:33:43.794992 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:43.795003 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:33:43.795013 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:33:43.795024 | orchestrator | ok: [testbed-node-5] 
2026-03-07 00:33:43.795034 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:33:43.795045 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:33:43.795071 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:33:43.795082 | orchestrator | 2026-03-07 00:33:43.795093 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-03-07 00:33:43.795106 | orchestrator | Saturday 07 March 2026 00:33:39 +0000 (0:00:01.171) 0:06:53.674 ******** 2026-03-07 00:33:43.795125 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:33:43.795143 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:33:43.795161 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:33:43.795179 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:33:43.795199 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:33:43.795217 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:33:43.795235 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:43.795254 | orchestrator | 2026-03-07 00:33:43.795271 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-03-07 00:33:43.795291 | orchestrator | Saturday 07 March 2026 00:33:41 +0000 (0:00:01.686) 0:06:55.361 ******** 2026-03-07 00:33:43.795310 | orchestrator | ok: [testbed-manager] 2026-03-07 00:33:43.795329 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:33:43.795347 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:33:43.795366 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:33:43.795378 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:33:43.795388 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:33:43.795398 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:33:43.795409 | orchestrator | 2026-03-07 00:33:43.795420 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-03-07 00:33:43.795431 | orchestrator | Saturday 07 March 2026 00:33:42 +0000 (0:00:01.375) 0:06:56.736 ******** 2026-03-07 00:33:43.795442 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:33:43.795453 | orchestrator | 2026-03-07 00:33:43.795463 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-07 00:33:43.795474 | orchestrator | Saturday 07 March 2026 00:33:43 +0000 (0:00:00.937) 0:06:57.674 ******** 2026-03-07 00:33:43.795485 | orchestrator | 2026-03-07 00:33:43.795495 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-07 00:33:43.795506 | orchestrator | Saturday 07 March 2026 00:33:43 +0000 (0:00:00.041) 0:06:57.715 ******** 2026-03-07 00:33:43.795526 | orchestrator | 2026-03-07 00:33:43.795537 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-07 00:33:43.795547 | orchestrator | Saturday 07 March 2026 00:33:43 +0000 (0:00:00.051) 0:06:57.767 ******** 2026-03-07 00:33:43.795558 | orchestrator | 2026-03-07 00:33:43.795600 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-07 00:33:43.795630 | orchestrator | Saturday 07 March 2026 00:33:43 +0000 (0:00:00.040) 0:06:57.807 ******** 2026-03-07 00:34:09.974835 | orchestrator | 2026-03-07 00:34:09.974949 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-07 00:34:09.974965 | orchestrator | Saturday 07 March 2026 00:33:43 +0000 (0:00:00.039) 0:06:57.847 ******** 2026-03-07 00:34:09.974976 | orchestrator | 2026-03-07 00:34:09.974986 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-07 00:34:09.974996 | orchestrator | Saturday 07 March 2026 00:33:43 +0000 (0:00:00.048) 0:06:57.895 ******** 2026-03-07 00:34:09.975006 | orchestrator | 
2026-03-07 00:34:09.975016 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-07 00:34:09.975026 | orchestrator | Saturday 07 March 2026 00:33:43 +0000 (0:00:00.040) 0:06:57.936 ******** 2026-03-07 00:34:09.975036 | orchestrator | 2026-03-07 00:34:09.975046 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-07 00:34:09.975056 | orchestrator | Saturday 07 March 2026 00:33:43 +0000 (0:00:00.040) 0:06:57.976 ******** 2026-03-07 00:34:09.975068 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:34:09.975080 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:34:09.975091 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:34:09.975118 | orchestrator | 2026-03-07 00:34:09.975129 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-03-07 00:34:09.975151 | orchestrator | Saturday 07 March 2026 00:33:45 +0000 (0:00:01.228) 0:06:59.205 ******** 2026-03-07 00:34:09.975162 | orchestrator | changed: [testbed-manager] 2026-03-07 00:34:09.975175 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:34:09.975186 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:34:09.975197 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:34:09.975208 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:34:09.975219 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:34:09.975230 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:34:09.975241 | orchestrator | 2026-03-07 00:34:09.975252 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-03-07 00:34:09.975263 | orchestrator | Saturday 07 March 2026 00:33:46 +0000 (0:00:01.689) 0:07:00.894 ******** 2026-03-07 00:34:09.975275 | orchestrator | changed: [testbed-manager] 2026-03-07 00:34:09.975286 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:34:09.975297 | orchestrator | changed: [testbed-node-4] 
2026-03-07 00:34:09.975307 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:34:09.975318 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:34:09.975329 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:34:09.975340 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:34:09.975351 | orchestrator | 2026-03-07 00:34:09.975365 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-03-07 00:34:09.975377 | orchestrator | Saturday 07 March 2026 00:33:47 +0000 (0:00:01.234) 0:07:02.129 ******** 2026-03-07 00:34:09.975390 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:34:09.975403 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:34:09.975415 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:34:09.975428 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:34:09.975441 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:34:09.975454 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:34:09.975467 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:34:09.975481 | orchestrator | 2026-03-07 00:34:09.975494 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-03-07 00:34:09.975507 | orchestrator | Saturday 07 March 2026 00:33:50 +0000 (0:00:02.435) 0:07:04.564 ******** 2026-03-07 00:34:09.975569 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:34:09.975584 | orchestrator | 2026-03-07 00:34:09.975612 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-03-07 00:34:09.975626 | orchestrator | Saturday 07 March 2026 00:33:50 +0000 (0:00:00.089) 0:07:04.654 ******** 2026-03-07 00:34:09.975639 | orchestrator | ok: [testbed-manager] 2026-03-07 00:34:09.975652 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:34:09.975665 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:34:09.975678 | orchestrator | changed: [testbed-node-4] 2026-03-07 
00:34:09.975691 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:34:09.975704 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:34:09.975717 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:34:09.975730 | orchestrator | 2026-03-07 00:34:09.975743 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-03-07 00:34:09.975757 | orchestrator | Saturday 07 March 2026 00:33:51 +0000 (0:00:01.032) 0:07:05.687 ******** 2026-03-07 00:34:09.975770 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:34:09.975782 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:34:09.975793 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:34:09.975803 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:34:09.975814 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:34:09.975825 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:34:09.975835 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:34:09.975846 | orchestrator | 2026-03-07 00:34:09.975857 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-03-07 00:34:09.975868 | orchestrator | Saturday 07 March 2026 00:33:52 +0000 (0:00:00.521) 0:07:06.208 ******** 2026-03-07 00:34:09.975880 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:34:09.975894 | orchestrator | 2026-03-07 00:34:09.975905 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-03-07 00:34:09.975916 | orchestrator | Saturday 07 March 2026 00:33:53 +0000 (0:00:01.070) 0:07:07.278 ******** 2026-03-07 00:34:09.975926 | orchestrator | ok: [testbed-manager] 2026-03-07 00:34:09.975937 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:34:09.975948 | orchestrator 
| ok: [testbed-node-4] 2026-03-07 00:34:09.975959 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:34:09.975970 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:34:09.975980 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:34:09.975991 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:34:09.976002 | orchestrator | 2026-03-07 00:34:09.976013 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-03-07 00:34:09.976024 | orchestrator | Saturday 07 March 2026 00:33:53 +0000 (0:00:00.899) 0:07:08.177 ******** 2026-03-07 00:34:09.976035 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-03-07 00:34:09.976064 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-03-07 00:34:09.976076 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-03-07 00:34:09.976087 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-03-07 00:34:09.976098 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-03-07 00:34:09.976109 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-03-07 00:34:09.976119 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-03-07 00:34:09.976131 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-03-07 00:34:09.976141 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-03-07 00:34:09.976152 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-03-07 00:34:09.976163 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-03-07 00:34:09.976174 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-03-07 00:34:09.976185 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-03-07 00:34:09.976202 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-03-07 00:34:09.976213 | orchestrator | 2026-03-07 00:34:09.976224 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-03-07 00:34:09.976235 | orchestrator | Saturday 07 March 2026 00:33:56 +0000 (0:00:02.488) 0:07:10.666 ******** 2026-03-07 00:34:09.976246 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:34:09.976256 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:34:09.976267 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:34:09.976278 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:34:09.976288 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:34:09.976299 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:34:09.976310 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:34:09.976321 | orchestrator | 2026-03-07 00:34:09.976331 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-03-07 00:34:09.976343 | orchestrator | Saturday 07 March 2026 00:33:57 +0000 (0:00:00.678) 0:07:11.344 ******** 2026-03-07 00:34:09.976355 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:34:09.976369 | orchestrator | 2026-03-07 00:34:09.976380 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-03-07 00:34:09.976391 | orchestrator | Saturday 07 March 2026 00:33:58 +0000 (0:00:00.883) 0:07:12.228 ******** 2026-03-07 00:34:09.976402 | orchestrator | ok: [testbed-manager] 2026-03-07 00:34:09.976413 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:34:09.976424 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:34:09.976435 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:34:09.976446 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:34:09.976457 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:34:09.976467 | orchestrator | ok: 
[testbed-node-2] 2026-03-07 00:34:09.976478 | orchestrator | 2026-03-07 00:34:09.976489 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-03-07 00:34:09.976500 | orchestrator | Saturday 07 March 2026 00:33:58 +0000 (0:00:00.860) 0:07:13.088 ******** 2026-03-07 00:34:09.976511 | orchestrator | ok: [testbed-manager] 2026-03-07 00:34:09.976527 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:34:09.976587 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:34:09.976599 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:34:09.976609 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:34:09.976620 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:34:09.976631 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:34:09.976641 | orchestrator | 2026-03-07 00:34:09.976652 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-03-07 00:34:09.976663 | orchestrator | Saturday 07 March 2026 00:33:59 +0000 (0:00:01.022) 0:07:14.111 ******** 2026-03-07 00:34:09.976675 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:34:09.976686 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:34:09.976696 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:34:09.976707 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:34:09.976718 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:34:09.976729 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:34:09.976739 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:34:09.976750 | orchestrator | 2026-03-07 00:34:09.976761 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-03-07 00:34:09.976772 | orchestrator | Saturday 07 March 2026 00:34:00 +0000 (0:00:00.523) 0:07:14.634 ******** 2026-03-07 00:34:09.976783 | orchestrator | ok: [testbed-manager] 2026-03-07 00:34:09.976793 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:34:09.976804 | 
orchestrator | ok: [testbed-node-4]
2026-03-07 00:34:09.976815 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:34:09.976826 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:34:09.976837 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:34:09.976854 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:34:09.976865 | orchestrator |
2026-03-07 00:34:09.976888 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-07 00:34:09.976900 | orchestrator | Saturday 07 March 2026 00:34:01 +0000 (0:00:01.485) 0:07:16.119 ********
2026-03-07 00:34:09.976921 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:34:09.976933 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:34:09.976944 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:34:09.976954 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:34:09.976965 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:34:09.976976 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:34:09.976987 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:34:09.976997 | orchestrator |
2026-03-07 00:34:09.977008 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-07 00:34:09.977019 | orchestrator | Saturday 07 March 2026 00:34:02 +0000 (0:00:00.515) 0:07:16.635 ********
2026-03-07 00:34:09.977030 | orchestrator | ok: [testbed-manager]
2026-03-07 00:34:09.977041 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:34:09.977052 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:34:09.977063 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:34:09.977074 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:34:09.977085 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:34:09.977103 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:34:42.608272 | orchestrator |
2026-03-07 00:34:42.608387 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-07 00:34:42.608406 | orchestrator | Saturday 07 March 2026 00:34:09 +0000 (0:00:07.529) 0:07:24.164 ********
2026-03-07 00:34:42.608418 | orchestrator | ok: [testbed-manager]
2026-03-07 00:34:42.608430 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:34:42.608442 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:34:42.608453 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:34:42.608464 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:34:42.608475 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:34:42.608547 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:34:42.608562 | orchestrator |
2026-03-07 00:34:42.608574 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-07 00:34:42.608585 | orchestrator | Saturday 07 March 2026 00:34:11 +0000 (0:00:01.584) 0:07:25.749 ********
2026-03-07 00:34:42.608596 | orchestrator | ok: [testbed-manager]
2026-03-07 00:34:42.608607 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:34:42.608618 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:34:42.608628 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:34:42.608639 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:34:42.608650 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:34:42.608661 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:34:42.608671 | orchestrator |
2026-03-07 00:34:42.608682 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-07 00:34:42.608694 | orchestrator | Saturday 07 March 2026 00:34:13 +0000 (0:00:01.775) 0:07:27.525 ********
2026-03-07 00:34:42.608704 | orchestrator | ok: [testbed-manager]
2026-03-07 00:34:42.608715 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:34:42.608726 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:34:42.608736 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:34:42.608747 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:34:42.608758 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:34:42.608769 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:34:42.608780 | orchestrator |
2026-03-07 00:34:42.608793 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-07 00:34:42.608806 | orchestrator | Saturday 07 March 2026 00:34:15 +0000 (0:00:01.752) 0:07:29.277 ********
2026-03-07 00:34:42.608818 | orchestrator | ok: [testbed-manager]
2026-03-07 00:34:42.608831 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:34:42.608843 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:34:42.608856 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:34:42.608897 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:34:42.608910 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:34:42.608922 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:34:42.608933 | orchestrator |
2026-03-07 00:34:42.608944 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-07 00:34:42.608955 | orchestrator | Saturday 07 March 2026 00:34:15 +0000 (0:00:00.846) 0:07:30.123 ********
2026-03-07 00:34:42.608966 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:34:42.608976 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:34:42.608988 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:34:42.608998 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:34:42.609009 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:34:42.609019 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:34:42.609030 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:34:42.609040 | orchestrator |
2026-03-07 00:34:42.609051 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-07 00:34:42.609062 | orchestrator | Saturday 07 March 2026 00:34:16 +0000 (0:00:00.992) 0:07:31.116 ********
2026-03-07 00:34:42.609073 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:34:42.609084 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:34:42.609094 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:34:42.609105 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:34:42.609116 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:34:42.609126 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:34:42.609137 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:34:42.609147 | orchestrator |
2026-03-07 00:34:42.609158 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-07 00:34:42.609169 | orchestrator | Saturday 07 March 2026 00:34:17 +0000 (0:00:00.523) 0:07:31.639 ********
2026-03-07 00:34:42.609179 | orchestrator | ok: [testbed-manager]
2026-03-07 00:34:42.609209 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:34:42.609220 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:34:42.609231 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:34:42.609242 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:34:42.609252 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:34:42.609263 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:34:42.609274 | orchestrator |
2026-03-07 00:34:42.609285 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-07 00:34:42.609296 | orchestrator | Saturday 07 March 2026 00:34:17 +0000 (0:00:00.512) 0:07:32.152 ********
2026-03-07 00:34:42.609308 | orchestrator | ok: [testbed-manager]
2026-03-07 00:34:42.609326 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:34:42.609345 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:34:42.609372 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:34:42.609392 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:34:42.609409 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:34:42.609427 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:34:42.609444 | orchestrator |
2026-03-07 00:34:42.609460 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-07 00:34:42.609476 | orchestrator | Saturday 07 March 2026 00:34:18 +0000 (0:00:00.573) 0:07:32.725 ********
2026-03-07 00:34:42.609522 | orchestrator | ok: [testbed-manager]
2026-03-07 00:34:42.609540 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:34:42.609557 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:34:42.609577 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:34:42.609595 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:34:42.609614 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:34:42.609627 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:34:42.609638 | orchestrator |
2026-03-07 00:34:42.609648 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-07 00:34:42.609660 | orchestrator | Saturday 07 March 2026 00:34:19 +0000 (0:00:00.695) 0:07:33.421 ********
2026-03-07 00:34:42.609670 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:34:42.609681 | orchestrator | ok: [testbed-manager]
2026-03-07 00:34:42.609692 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:34:42.609715 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:34:42.609726 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:34:42.609736 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:34:42.609747 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:34:42.609758 | orchestrator |
2026-03-07 00:34:42.609791 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-07 00:34:42.609803 | orchestrator | Saturday 07 March 2026 00:34:24 +0000 (0:00:05.273) 0:07:38.694 ********
2026-03-07 00:34:42.609814 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:34:42.609825 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:34:42.609836 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:34:42.609847 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:34:42.609858 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:34:42.609869 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:34:42.609880 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:34:42.609891 | orchestrator |
2026-03-07 00:34:42.609902 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-07 00:34:42.609913 | orchestrator | Saturday 07 March 2026 00:34:25 +0000 (0:00:00.539) 0:07:39.234 ********
2026-03-07 00:34:42.609926 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:34:42.609940 | orchestrator |
2026-03-07 00:34:42.609951 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-07 00:34:42.609962 | orchestrator | Saturday 07 March 2026 00:34:26 +0000 (0:00:01.002) 0:07:40.237 ********
2026-03-07 00:34:42.609973 | orchestrator | ok: [testbed-manager]
2026-03-07 00:34:42.609984 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:34:42.609998 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:34:42.610090 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:34:42.610104 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:34:42.610115 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:34:42.610126 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:34:42.610137 | orchestrator |
2026-03-07 00:34:42.610148 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-07 00:34:42.610197 | orchestrator | Saturday 07 March 2026 00:34:28 +0000 (0:00:02.118) 0:07:42.356 ********
2026-03-07 00:34:42.610209 | orchestrator | ok: [testbed-manager]
2026-03-07 00:34:42.610220 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:34:42.610231 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:34:42.610242 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:34:42.610252 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:34:42.610263 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:34:42.610274 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:34:42.610285 | orchestrator |
2026-03-07 00:34:42.610296 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-07 00:34:42.610307 | orchestrator | Saturday 07 March 2026 00:34:29 +0000 (0:00:01.228) 0:07:43.584 ********
2026-03-07 00:34:42.610318 | orchestrator | ok: [testbed-manager]
2026-03-07 00:34:42.610328 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:34:42.610339 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:34:42.610350 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:34:42.610361 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:34:42.610372 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:34:42.610382 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:34:42.610393 | orchestrator |
2026-03-07 00:34:42.610404 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-07 00:34:42.610415 | orchestrator | Saturday 07 March 2026 00:34:30 +0000 (0:00:00.848) 0:07:44.432 ********
2026-03-07 00:34:42.610434 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-07 00:34:42.610448 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-07 00:34:42.610468 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-07 00:34:42.610479 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-07 00:34:42.610532 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-07 00:34:42.610592 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-07 00:34:42.610614 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-07 00:34:42.610650 | orchestrator |
2026-03-07 00:34:42.610671 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-07 00:34:42.610689 | orchestrator | Saturday 07 March 2026 00:34:32 +0000 (0:00:01.884) 0:07:46.317 ********
2026-03-07 00:34:42.610705 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:34:42.610717 | orchestrator |
2026-03-07 00:34:42.610727 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-07 00:34:42.610738 | orchestrator | Saturday 07 March 2026 00:34:32 +0000 (0:00:00.826) 0:07:47.143 ********
2026-03-07 00:34:42.610749 | orchestrator | changed: [testbed-manager]
2026-03-07 00:34:42.610760 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:34:42.610771 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:34:42.610782 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:34:42.610793 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:34:42.610804 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:34:42.610815 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:34:42.610825 | orchestrator |
2026-03-07 00:34:42.610848 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-03-07 00:35:13.040956 | orchestrator | Saturday 07 March 2026 00:34:42 +0000 (0:00:09.655) 0:07:56.799 ********
2026-03-07 00:35:13.041110 | orchestrator | ok: [testbed-manager]
2026-03-07 00:35:13.041135 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:35:13.041154 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:35:13.041172 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:35:13.041191 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:35:13.041209 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:35:13.041228 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:35:13.041248 | orchestrator |
2026-03-07 00:35:13.041269 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-03-07 00:35:13.041288 | orchestrator | Saturday 07 March 2026 00:34:44 +0000 (0:00:01.983) 0:07:58.782 ********
2026-03-07 00:35:13.041306 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:35:13.041323 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:35:13.041342 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:35:13.041360 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:35:13.041381 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:35:13.041402 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:35:13.041424 | orchestrator |
2026-03-07 00:35:13.041472 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-03-07 00:35:13.041493 | orchestrator | Saturday 07 March 2026 00:34:45 +0000 (0:00:01.306) 0:08:00.089 ********
2026-03-07 00:35:13.041511 | orchestrator | changed: [testbed-manager]
2026-03-07 00:35:13.041529 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:35:13.041549 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:35:13.041570 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:35:13.041593 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:35:13.041650 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:35:13.041672 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:35:13.041694 | orchestrator |
2026-03-07 00:35:13.041713 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-03-07 00:35:13.041732 | orchestrator |
2026-03-07 00:35:13.041753 | orchestrator | TASK [Include hardening role] **************************************************
2026-03-07 00:35:13.041773 | orchestrator | Saturday 07 March 2026 00:34:47 +0000 (0:00:01.257) 0:08:01.346 ********
2026-03-07 00:35:13.041793 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:35:13.041813 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:35:13.041833 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:35:13.041853 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:35:13.041873 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:35:13.041893 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:35:13.041913 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:35:13.041934 | orchestrator |
2026-03-07 00:35:13.041952 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-03-07 00:35:13.041971 | orchestrator |
2026-03-07 00:35:13.041992 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-03-07 00:35:13.042013 | orchestrator | Saturday 07 March 2026 00:34:47 +0000 (0:00:00.632) 0:08:01.979 ********
2026-03-07 00:35:13.042120 | orchestrator | changed: [testbed-manager]
2026-03-07 00:35:13.042141 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:35:13.042160 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:35:13.042178 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:35:13.042196 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:35:13.042214 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:35:13.042233 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:35:13.042256 | orchestrator |
2026-03-07 00:35:13.042273 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-03-07 00:35:13.042309 | orchestrator | Saturday 07 March 2026 00:34:49 +0000 (0:00:01.367) 0:08:03.346 ********
2026-03-07 00:35:13.042327 | orchestrator | ok: [testbed-manager]
2026-03-07 00:35:13.042343 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:35:13.042360 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:35:13.042378 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:35:13.042394 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:35:13.042411 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:35:13.042427 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:35:13.042472 | orchestrator |
2026-03-07 00:35:13.042494 | orchestrator | TASK [Include auditd role] *****************************************************
2026-03-07 00:35:13.042511 | orchestrator | Saturday 07 March 2026 00:34:50 +0000 (0:00:01.350) 0:08:04.697 ********
2026-03-07 00:35:13.042527 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:35:13.042544 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:35:13.042559 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:35:13.042576 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:35:13.042592 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:35:13.042608 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:35:13.042625 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:35:13.042641 | orchestrator |
2026-03-07 00:35:13.042658 | orchestrator | TASK [Include smartd role] *****************************************************
2026-03-07 00:35:13.042674 | orchestrator | Saturday 07 March 2026 00:34:50 +0000 (0:00:00.404) 0:08:05.102 ********
2026-03-07 00:35:13.042691 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:35:13.042710 | orchestrator |
2026-03-07 00:35:13.042726 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-03-07 00:35:13.042743 | orchestrator | Saturday 07 March 2026 00:34:51 +0000 (0:00:00.800) 0:08:05.902 ********
2026-03-07 00:35:13.042762 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:35:13.042803 | orchestrator |
2026-03-07 00:35:13.042819 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-03-07 00:35:13.042836 | orchestrator | Saturday 07 March 2026 00:34:52 +0000 (0:00:00.686) 0:08:06.589 ********
2026-03-07 00:35:13.042852 | orchestrator | changed: [testbed-manager]
2026-03-07 00:35:13.042867 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:35:13.042884 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:35:13.042899 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:35:13.042915 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:35:13.042931 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:35:13.042947 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:35:13.042963 | orchestrator |
2026-03-07 00:35:13.043012 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-03-07 00:35:13.043029 | orchestrator | Saturday 07 March 2026 00:35:01 +0000 (0:00:09.337) 0:08:15.926 ********
2026-03-07 00:35:13.043045 | orchestrator | changed: [testbed-manager]
2026-03-07 00:35:13.043060 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:35:13.043077 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:35:13.043093 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:35:13.043108 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:35:13.043124 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:35:13.043140 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:35:13.043156 | orchestrator |
2026-03-07 00:35:13.043172 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-03-07 00:35:13.043188 | orchestrator | Saturday 07 March 2026 00:35:02 +0000 (0:00:00.844) 0:08:16.771 ********
2026-03-07 00:35:13.043205 | orchestrator | changed: [testbed-manager]
2026-03-07 00:35:13.043221 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:35:13.043237 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:35:13.043253 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:35:13.043268 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:35:13.043283 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:35:13.043299 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:35:13.043314 | orchestrator |
2026-03-07 00:35:13.043330 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-03-07 00:35:13.043346 | orchestrator | Saturday 07 March 2026 00:35:03 +0000 (0:00:01.351) 0:08:18.122 ********
2026-03-07 00:35:13.043362 | orchestrator | changed: [testbed-manager]
2026-03-07 00:35:13.043378 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:35:13.043394 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:35:13.043409 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:35:13.043425 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:35:13.043441 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:35:13.043546 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:35:13.043565 | orchestrator |
2026-03-07 00:35:13.043580 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-03-07 00:35:13.043597 | orchestrator | Saturday 07 March 2026 00:35:05 +0000 (0:00:01.950) 0:08:20.072 ********
2026-03-07 00:35:13.043615 | orchestrator | changed: [testbed-manager]
2026-03-07 00:35:13.043631 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:35:13.043647 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:35:13.043663 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:35:13.043680 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:35:13.043698 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:35:13.043714 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:35:13.043735 | orchestrator |
2026-03-07 00:35:13.043754 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-03-07 00:35:13.043771 | orchestrator | Saturday 07 March 2026 00:35:07 +0000 (0:00:01.258) 0:08:21.331 ********
2026-03-07 00:35:13.043789 | orchestrator | changed: [testbed-manager]
2026-03-07 00:35:13.043807 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:35:13.043824 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:35:13.043857 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:35:13.043875 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:35:13.043892 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:35:13.043909 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:35:13.043927 | orchestrator |
2026-03-07 00:35:13.043944 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-03-07 00:35:13.043962 | orchestrator |
2026-03-07 00:35:13.043990 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-03-07 00:35:13.044008 | orchestrator | Saturday 07 March 2026 00:35:08 +0000 (0:00:01.046) 0:08:22.377 ********
2026-03-07 00:35:13.044026 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:35:13.044045 | orchestrator |
2026-03-07 00:35:13.044063 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-07 00:35:13.044081 | orchestrator | Saturday 07 March 2026 00:35:08 +0000 (0:00:00.803) 0:08:23.181 ********
2026-03-07 00:35:13.044097 | orchestrator | ok: [testbed-manager]
2026-03-07 00:35:13.044114 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:35:13.044132 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:35:13.044150 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:35:13.044167 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:35:13.044184 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:35:13.044201 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:35:13.044218 | orchestrator |
2026-03-07 00:35:13.044236 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-07 00:35:13.044255 | orchestrator | Saturday 07 March 2026 00:35:10 +0000 (0:00:01.041) 0:08:24.222 ********
2026-03-07 00:35:13.044273 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:35:13.044291 | orchestrator | changed: [testbed-manager]
2026-03-07 00:35:13.044309 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:35:13.044328 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:35:13.044347 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:35:13.044367 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:35:13.044387 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:35:13.044407 | orchestrator |
2026-03-07 00:35:13.044426 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-03-07 00:35:13.044475 | orchestrator | Saturday 07 March 2026 00:35:11 +0000 (0:00:01.165) 0:08:25.388 ********
2026-03-07 00:35:13.044497 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:35:13.044516 | orchestrator |
2026-03-07 00:35:13.044535 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-07 00:35:13.044554 | orchestrator | Saturday 07 March 2026 00:35:12 +0000 (0:00:01.005) 0:08:26.393 ********
2026-03-07 00:35:13.044572 | orchestrator | ok: [testbed-manager]
2026-03-07 00:35:13.044588 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:35:13.044603 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:35:13.044620 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:35:13.044636 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:35:13.044655 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:35:13.044674 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:35:13.044691 | orchestrator |
2026-03-07 00:35:13.044731 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-07 00:35:14.626230 | orchestrator | Saturday 07 March 2026 00:35:13 +0000 (0:00:00.840) 0:08:27.234 ********
2026-03-07 00:35:14.626335 | orchestrator | changed: [testbed-manager]
2026-03-07 00:35:14.626352 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:35:14.626364 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:35:14.626375 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:35:14.626386 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:35:14.626397 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:35:14.626408 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:35:14.626419 | orchestrator |
2026-03-07 00:35:14.626503 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:35:14.626518 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-07 00:35:14.626531 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-07 00:35:14.626542 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-07 00:35:14.626554 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-07 00:35:14.626564 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-03-07 00:35:14.626575 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-07 00:35:14.626586 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-07 00:35:14.626597 | orchestrator |
2026-03-07 00:35:14.626608 | orchestrator |
2026-03-07 00:35:14.626619 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:35:14.626630 | orchestrator | Saturday 07 March 2026 00:35:14 +0000 (0:00:01.140) 0:08:28.374 ********
2026-03-07 00:35:14.626641 | orchestrator | ===============================================================================
2026-03-07 00:35:14.626652 | orchestrator | osism.commons.packages : Install required packages --------------------- 87.78s
2026-03-07 00:35:14.626662 | orchestrator | osism.commons.packages : Download required packages -------------------- 33.96s
2026-03-07 00:35:14.626673 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.88s
2026-03-07 00:35:14.626684 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.57s
2026-03-07 00:35:14.626695 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.59s
2026-03-07 00:35:14.626721 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.84s
2026-03-07 00:35:14.626732 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.39s
2026-03-07 00:35:14.626743 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.96s
2026-03-07 00:35:14.626755 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.66s
2026-03-07 00:35:14.626767 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.34s
2026-03-07 00:35:14.626780 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.33s
2026-03-07 00:35:14.626792 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.43s
2026-03-07 00:35:14.626804 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.38s
2026-03-07 00:35:14.626817 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.07s
2026-03-07 00:35:14.626829 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 7.94s
2026-03-07 00:35:14.626841 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.62s
2026-03-07 00:35:14.626854 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.53s
2026-03-07 00:35:14.626867 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.88s
2026-03-07 00:35:14.626879 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 6.55s
2026-03-07 00:35:14.626891 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.30s
2026-03-07 00:35:14.901164 | orchestrator | + osism apply fail2ban
2026-03-07 00:35:27.525106 | orchestrator | 2026-03-07 00:35:27 | INFO  | Task 60331dc3-5f7c-4c3f-b24d-0ba51d79d3f3 (fail2ban) was prepared for execution.
2026-03-07 00:35:27.525234 | orchestrator | 2026-03-07 00:35:27 | INFO  | It takes a moment until task 60331dc3-5f7c-4c3f-b24d-0ba51d79d3f3 (fail2ban) has been started and output is visible here.
2026-03-07 00:35:49.560019 | orchestrator |
2026-03-07 00:35:49.560131 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-03-07 00:35:49.560148 | orchestrator |
2026-03-07 00:35:49.560160 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-03-07 00:35:49.560171 | orchestrator | Saturday 07 March 2026 00:35:32 +0000 (0:00:00.268) 0:00:00.268 ********
2026-03-07 00:35:49.560185 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:35:49.560199 | orchestrator |
2026-03-07 00:35:49.560211 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-03-07 00:35:49.560223 | orchestrator | Saturday 07 March 2026 00:35:33 +0000 (0:00:01.149) 0:00:01.418 ********
2026-03-07 00:35:49.560235 | orchestrator | changed: [testbed-manager]
2026-03-07 00:35:49.560247 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:35:49.560259 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:35:49.560270 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:35:49.560282 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:35:49.560293 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:35:49.560304 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:35:49.560316 | orchestrator |
2026-03-07 00:35:49.560328 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-03-07 00:35:49.560340 | orchestrator | Saturday 07 March 2026 00:35:44 +0000 (0:00:11.210) 0:00:12.629 ********
2026-03-07 00:35:49.560351 | orchestrator | changed: [testbed-manager]
2026-03-07 00:35:49.560363 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:35:49.560374 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:35:49.560385 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:35:49.560463 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:35:49.560482 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:35:49.560498 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:35:49.560509 | orchestrator |
2026-03-07 00:35:49.560520 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-07 00:35:49.560532 | orchestrator | Saturday 07 March 2026 00:35:46 +0000 (0:00:01.452) 0:00:14.167 ********
2026-03-07 00:35:49.560543 | orchestrator | ok: [testbed-manager]
2026-03-07 00:35:49.560555 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:35:49.560566 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:35:49.560577 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:35:49.560587 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:35:49.560598 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:35:49.560609 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:35:49.560620 | orchestrator |
2026-03-07 00:35:49.560631 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-07 00:35:49.560642 | orchestrator | Saturday 07 March 2026 00:35:47 +0000 (0:00:01.452) 0:00:15.620 ********
2026-03-07 00:35:49.560653 | orchestrator | changed: [testbed-manager]
2026-03-07 00:35:49.560664 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:35:49.560675 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:35:49.560686 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:35:49.560697 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:35:49.560708 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:35:49.560719 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:35:49.560729 | orchestrator |
2026-03-07 00:35:49.560740 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:35:49.560752 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:35:49.560791 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:35:49.560804 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:35:49.560815 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:35:49.560826 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:35:49.560837 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:35:49.560848 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:35:49.560859 | orchestrator |
2026-03-07 00:35:49.560870 | orchestrator |
2026-03-07 00:35:49.560881 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:35:49.560892 | orchestrator | Saturday 07 March 2026 00:35:49 +0000 (0:00:01.651) 0:00:17.271 ********
2026-03-07 00:35:49.560902 | orchestrator | ===============================================================================
2026-03-07 00:35:49.560913 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.21s
2026-03-07 00:35:49.560924 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.65s
2026-03-07 00:35:49.560935 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.54s
2026-03-07 00:35:49.560946 | orchestrator | osism.services.fail2ban :
Manage fail2ban service ----------------------- 1.45s 2026-03-07 00:35:49.560957 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.15s 2026-03-07 00:35:49.826299 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-03-07 00:35:49.826481 | orchestrator | + osism apply network 2026-03-07 00:36:01.833437 | orchestrator | 2026-03-07 00:36:01 | INFO  | Task 2837b03b-c5d7-4d5e-96b9-2a3d51298e78 (network) was prepared for execution. 2026-03-07 00:36:01.833558 | orchestrator | 2026-03-07 00:36:01 | INFO  | It takes a moment until task 2837b03b-c5d7-4d5e-96b9-2a3d51298e78 (network) has been started and output is visible here. 2026-03-07 00:36:30.358224 | orchestrator | 2026-03-07 00:36:30.358333 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-03-07 00:36:30.358415 | orchestrator | 2026-03-07 00:36:30.358428 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-03-07 00:36:30.358440 | orchestrator | Saturday 07 March 2026 00:36:06 +0000 (0:00:00.253) 0:00:00.253 ******** 2026-03-07 00:36:30.358451 | orchestrator | ok: [testbed-manager] 2026-03-07 00:36:30.358463 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:36:30.358474 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:36:30.358485 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:36:30.358496 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:36:30.358506 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:36:30.358517 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:36:30.358528 | orchestrator | 2026-03-07 00:36:30.358539 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-03-07 00:36:30.358550 | orchestrator | Saturday 07 March 2026 00:36:06 +0000 (0:00:00.728) 0:00:00.981 ******** 2026-03-07 00:36:30.358564 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:36:30.358577 | orchestrator | 2026-03-07 00:36:30.358589 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-03-07 00:36:30.358600 | orchestrator | Saturday 07 March 2026 00:36:08 +0000 (0:00:01.224) 0:00:02.205 ******** 2026-03-07 00:36:30.358635 | orchestrator | ok: [testbed-manager] 2026-03-07 00:36:30.358646 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:36:30.358657 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:36:30.358668 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:36:30.358678 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:36:30.358689 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:36:30.358699 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:36:30.358710 | orchestrator | 2026-03-07 00:36:30.358721 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-03-07 00:36:30.358732 | orchestrator | Saturday 07 March 2026 00:36:10 +0000 (0:00:02.082) 0:00:04.288 ******** 2026-03-07 00:36:30.358742 | orchestrator | ok: [testbed-manager] 2026-03-07 00:36:30.358753 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:36:30.358765 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:36:30.358779 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:36:30.358791 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:36:30.358803 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:36:30.358815 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:36:30.358828 | orchestrator | 2026-03-07 00:36:30.358840 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-03-07 00:36:30.358852 | orchestrator | Saturday 07 March 2026 00:36:12 +0000 (0:00:01.946) 0:00:06.235 ******** 
2026-03-07 00:36:30.358865 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-03-07 00:36:30.358878 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-03-07 00:36:30.358890 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-03-07 00:36:30.358903 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-03-07 00:36:30.358916 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-03-07 00:36:30.358928 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-03-07 00:36:30.358941 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-03-07 00:36:30.358953 | orchestrator | 2026-03-07 00:36:30.358981 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-03-07 00:36:30.358995 | orchestrator | Saturday 07 March 2026 00:36:13 +0000 (0:00:00.950) 0:00:07.185 ******** 2026-03-07 00:36:30.359013 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-07 00:36:30.359027 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-07 00:36:30.359039 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-07 00:36:30.359052 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-07 00:36:30.359065 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-07 00:36:30.359077 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-07 00:36:30.359089 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-07 00:36:30.359102 | orchestrator | 2026-03-07 00:36:30.359114 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-03-07 00:36:30.359147 | orchestrator | Saturday 07 March 2026 00:36:16 +0000 (0:00:03.182) 0:00:10.368 ******** 2026-03-07 00:36:30.359167 | orchestrator | changed: [testbed-manager] 2026-03-07 00:36:30.359209 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:36:30.359229 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:36:30.359246 | orchestrator | changed: 
[testbed-node-2] 2026-03-07 00:36:30.359264 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:36:30.359281 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:36:30.359299 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:36:30.359317 | orchestrator | 2026-03-07 00:36:30.359357 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-03-07 00:36:30.359378 | orchestrator | Saturday 07 March 2026 00:36:17 +0000 (0:00:01.686) 0:00:12.055 ******** 2026-03-07 00:36:30.359397 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-07 00:36:30.359416 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-07 00:36:30.359435 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-07 00:36:30.359453 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-07 00:36:30.359471 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-07 00:36:30.359505 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-07 00:36:30.359524 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-07 00:36:30.359543 | orchestrator | 2026-03-07 00:36:30.359561 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-07 00:36:30.359579 | orchestrator | Saturday 07 March 2026 00:36:19 +0000 (0:00:01.689) 0:00:13.744 ******** 2026-03-07 00:36:30.359598 | orchestrator | ok: [testbed-manager] 2026-03-07 00:36:30.359617 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:36:30.359637 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:36:30.359655 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:36:30.359674 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:36:30.359693 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:36:30.359712 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:36:30.359729 | orchestrator | 2026-03-07 00:36:30.359749 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-07 00:36:30.359794 | 
orchestrator | Saturday 07 March 2026 00:36:20 +0000 (0:00:01.117) 0:00:14.862 ******** 2026-03-07 00:36:30.359813 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:36:30.359832 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:36:30.359851 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:36:30.359869 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:36:30.359887 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:36:30.359905 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:36:30.359924 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:36:30.359943 | orchestrator | 2026-03-07 00:36:30.359961 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-03-07 00:36:30.359980 | orchestrator | Saturday 07 March 2026 00:36:21 +0000 (0:00:00.641) 0:00:15.504 ******** 2026-03-07 00:36:30.359998 | orchestrator | ok: [testbed-manager] 2026-03-07 00:36:30.360017 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:36:30.360036 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:36:30.360055 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:36:30.360074 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:36:30.360091 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:36:30.360110 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:36:30.360128 | orchestrator | 2026-03-07 00:36:30.360149 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-07 00:36:30.360167 | orchestrator | Saturday 07 March 2026 00:36:23 +0000 (0:00:02.562) 0:00:18.066 ******** 2026-03-07 00:36:30.360185 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:36:30.360204 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:36:30.360221 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:36:30.360239 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:36:30.360257 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:36:30.360274 | 
orchestrator | skipping: [testbed-node-5] 2026-03-07 00:36:30.360286 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-03-07 00:36:30.360298 | orchestrator | 2026-03-07 00:36:30.360310 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-07 00:36:30.360321 | orchestrator | Saturday 07 March 2026 00:36:24 +0000 (0:00:00.895) 0:00:18.962 ******** 2026-03-07 00:36:30.360331 | orchestrator | ok: [testbed-manager] 2026-03-07 00:36:30.360371 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:36:30.360382 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:36:30.360393 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:36:30.360404 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:36:30.360414 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:36:30.360425 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:36:30.360436 | orchestrator | 2026-03-07 00:36:30.360446 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-07 00:36:30.360457 | orchestrator | Saturday 07 March 2026 00:36:26 +0000 (0:00:01.707) 0:00:20.669 ******** 2026-03-07 00:36:30.360469 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:36:30.360492 | orchestrator | 2026-03-07 00:36:30.360503 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-07 00:36:30.360513 | orchestrator | Saturday 07 March 2026 00:36:27 +0000 (0:00:01.106) 0:00:21.775 ******** 2026-03-07 00:36:30.360524 | orchestrator | ok: [testbed-manager] 2026-03-07 00:36:30.360535 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:36:30.360546 | orchestrator 
| ok: [testbed-node-1] 2026-03-07 00:36:30.360556 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:36:30.360567 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:36:30.360585 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:36:30.360596 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:36:30.360607 | orchestrator | 2026-03-07 00:36:30.360618 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-07 00:36:30.360629 | orchestrator | Saturday 07 March 2026 00:36:28 +0000 (0:00:01.048) 0:00:22.824 ******** 2026-03-07 00:36:30.360640 | orchestrator | ok: [testbed-manager] 2026-03-07 00:36:30.360650 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:36:30.360661 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:36:30.360672 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:36:30.360683 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:36:30.360693 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:36:30.360704 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:36:30.360714 | orchestrator | 2026-03-07 00:36:30.360725 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-07 00:36:30.360736 | orchestrator | Saturday 07 March 2026 00:36:29 +0000 (0:00:00.580) 0:00:23.404 ******** 2026-03-07 00:36:30.360747 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-07 00:36:30.360758 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-07 00:36:30.360769 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-07 00:36:30.360780 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-07 00:36:30.360791 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-07 00:36:30.360801 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-07 00:36:30.360812 | 
orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-07 00:36:30.360823 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-07 00:36:30.360833 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-07 00:36:30.360844 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-07 00:36:30.360855 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-07 00:36:30.360866 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-07 00:36:30.360876 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-07 00:36:30.360887 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-07 00:36:30.360898 | orchestrator | 2026-03-07 00:36:30.360918 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-07 00:36:44.882630 | orchestrator | Saturday 07 March 2026 00:36:30 +0000 (0:00:01.094) 0:00:24.498 ******** 2026-03-07 00:36:44.883565 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:36:44.883597 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:36:44.883609 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:36:44.883617 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:36:44.883625 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:36:44.883633 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:36:44.883641 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:36:44.883649 | orchestrator | 2026-03-07 00:36:44.883658 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-07 00:36:44.883689 | orchestrator | Saturday 07 March 2026 00:36:30 +0000 (0:00:00.469) 0:00:24.968 ******** 2026-03-07 00:36:44.883700 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-2, testbed-node-3 2026-03-07 00:36:44.883710 | orchestrator | 2026-03-07 00:36:44.883718 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-07 00:36:44.883726 | orchestrator | Saturday 07 March 2026 00:36:34 +0000 (0:00:03.885) 0:00:28.854 ******** 2026-03-07 00:36:44.883735 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:36:44.883746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:36:44.883755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:36:44.883763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:36:44.883771 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-07 
00:36:44.883791 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:36:44.883799 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:36:44.883807 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:36:44.883815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:36:44.883830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:36:44.883838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:36:44.883863 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:36:44.883878 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:36:44.883886 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:36:44.883894 | orchestrator | 2026-03-07 00:36:44.883902 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-07 00:36:44.883911 | orchestrator | Saturday 07 March 2026 00:36:39 +0000 (0:00:04.986) 0:00:33.841 ******** 2026-03-07 00:36:44.883919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:36:44.883928 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:36:44.883936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:36:44.883944 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:36:44.883952 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:36:44.883960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:36:44.883972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:36:44.883981 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:36:44.883989 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-07 00:36:44.883997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 
'mtu': 1350, 'vni': 23}}) 2026-03-07 00:36:44.884005 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:36:44.884018 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:36:44.884035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:36:51.541685 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-07 00:36:51.541798 | orchestrator | 2026-03-07 00:36:51.541814 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-07 00:36:51.541825 | orchestrator | Saturday 07 March 2026 00:36:44 +0000 (0:00:05.181) 0:00:39.022 ******** 2026-03-07 00:36:51.541837 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:36:51.541847 | orchestrator | 2026-03-07 00:36:51.541856 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-03-07 00:36:51.541866 | orchestrator | Saturday 07 March 2026 00:36:45 +0000 (0:00:01.095) 0:00:40.118 ********
2026-03-07 00:36:51.541875 | orchestrator | ok: [testbed-manager]
2026-03-07 00:36:51.541884 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:36:51.541893 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:36:51.541902 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:36:51.541910 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:36:51.541919 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:36:51.541928 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:36:51.541937 | orchestrator |
2026-03-07 00:36:51.541946 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-07 00:36:51.541955 | orchestrator | Saturday 07 March 2026 00:36:47 +0000 (0:00:01.809) 0:00:41.928 ********
2026-03-07 00:36:51.541964 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-07 00:36:51.541973 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-07 00:36:51.541982 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-07 00:36:51.541991 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-07 00:36:51.542000 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:36:51.542010 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-07 00:36:51.542079 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-07 00:36:51.542088 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-07 00:36:51.542098 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-07 00:36:51.542106 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:36:51.542115 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-07 00:36:51.542139 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-07 00:36:51.542149 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-07 00:36:51.542157 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-07 00:36:51.542166 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:36:51.542202 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-07 00:36:51.542217 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-07 00:36:51.542232 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-07 00:36:51.542248 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-07 00:36:51.542262 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:36:51.542277 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-07 00:36:51.542290 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-07 00:36:51.542326 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-07 00:36:51.542344 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-07 00:36:51.542360 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-07 00:36:51.542375 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-07 00:36:51.542387 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-07 00:36:51.542396 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-07 00:36:51.542404 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:36:51.542413 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:36:51.542422 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-07 00:36:51.542430 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-07 00:36:51.542439 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-07 00:36:51.542448 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-07 00:36:51.542456 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:36:51.542465 | orchestrator |
2026-03-07 00:36:51.542474 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-07 00:36:51.542501 | orchestrator | Saturday 07 March 2026 00:36:49 +0000 (0:00:02.026) 0:00:43.954 ********
2026-03-07 00:36:51.542510 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:36:51.542519 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:36:51.542528 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:36:51.542537 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:36:51.542545 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:36:51.542554 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:36:51.542562 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:36:51.542571 | orchestrator |
2026-03-07 00:36:51.542580 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-07 00:36:51.542588 | orchestrator | Saturday 07 March 2026 00:36:50 +0000 (0:00:00.684) 0:00:44.639 ********
2026-03-07 00:36:51.542597 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:36:51.542613 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:36:51.542627 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:36:51.542640 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:36:51.542655 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:36:51.542670 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:36:51.542684 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:36:51.542697 | orchestrator |
2026-03-07 00:36:51.542712 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:36:51.542729 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-07 00:36:51.542745 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-07 00:36:51.542774 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-07 00:36:51.542790 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-07 00:36:51.542804 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-07 00:36:51.542819 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-07 00:36:51.542830 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-07 00:36:51.542838 | orchestrator |
2026-03-07 00:36:51.542847 | orchestrator |
2026-03-07 00:36:51.542856 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:36:51.542864 | orchestrator | Saturday 07 March 2026 00:36:51 +0000 (0:00:00.701) 0:00:45.341 ********
2026-03-07 00:36:51.542873 | orchestrator | ===============================================================================
2026-03-07 00:36:51.542889 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.18s
2026-03-07 00:36:51.542898 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.99s
2026-03-07 00:36:51.542907 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.89s
2026-03-07 00:36:51.542915 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.18s
2026-03-07 00:36:51.542924 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.56s
2026-03-07 00:36:51.542932 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.08s
2026-03-07 00:36:51.542941 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.03s
2026-03-07 00:36:51.542949 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.95s
2026-03-07 00:36:51.542958 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.81s
2026-03-07 00:36:51.542967 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.71s
2026-03-07 00:36:51.542975 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.69s
2026-03-07 00:36:51.542983 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.69s
2026-03-07 00:36:51.542992 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.22s
2026-03-07 00:36:51.543001 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.12s
2026-03-07 00:36:51.543009 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.11s
2026-03-07 00:36:51.543018 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.10s
2026-03-07 00:36:51.543026 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.09s
2026-03-07 00:36:51.543035 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.05s
2026-03-07 00:36:51.543043 | orchestrator | osism.commons.network : Create required directories --------------------- 0.95s
2026-03-07 00:36:51.543052 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.90s
2026-03-07 00:36:51.798522 | orchestrator | + osism apply wireguard
2026-03-07 00:37:03.955936 | orchestrator | 2026-03-07 00:37:03 | INFO  | Task 6f00b95d-e5a3-4305-88ba-0ba0c6f66e32 (wireguard) was prepared for execution.
2026-03-07 00:37:03.956040 | orchestrator | 2026-03-07 00:37:03 | INFO  | It takes a moment until task 6f00b95d-e5a3-4305-88ba-0ba0c6f66e32 (wireguard) has been started and output is visible here.
2026-03-07 00:37:22.803588 | orchestrator |
2026-03-07 00:37:22.803701 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-07 00:37:22.803739 | orchestrator |
2026-03-07 00:37:22.803750 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-07 00:37:22.803760 | orchestrator | Saturday 07 March 2026 00:37:08 +0000 (0:00:00.222) 0:00:00.222 ********
2026-03-07 00:37:22.803769 | orchestrator | ok: [testbed-manager]
2026-03-07 00:37:22.803779 | orchestrator |
2026-03-07 00:37:22.803788 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-07 00:37:22.803797 | orchestrator | Saturday 07 March 2026 00:37:09 +0000 (0:00:01.429) 0:00:01.651 ********
2026-03-07 00:37:22.803810 | orchestrator | changed: [testbed-manager]
2026-03-07 00:37:22.803827 | orchestrator |
2026-03-07 00:37:22.803846 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-07 00:37:22.803862 | orchestrator | Saturday 07 March 2026 00:37:15 +0000 (0:00:05.992) 0:00:07.643 ********
2026-03-07 00:37:22.803877 | orchestrator | changed: [testbed-manager]
2026-03-07 00:37:22.803891 | orchestrator |
2026-03-07 00:37:22.803906 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-07 00:37:22.803921 | orchestrator | Saturday 07 March 2026 00:37:16 +0000 (0:00:00.518) 0:00:08.161 ********
2026-03-07 00:37:22.803936 | orchestrator | changed: [testbed-manager]
2026-03-07 00:37:22.803952 | orchestrator |
2026-03-07 00:37:22.803967 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-07 00:37:22.803982 | orchestrator | Saturday 07 March 2026 00:37:16 +0000 (0:00:00.413) 0:00:08.575 ********
2026-03-07 00:37:22.803998 | orchestrator | ok: [testbed-manager]
2026-03-07 00:37:22.804008 | orchestrator |
2026-03-07 00:37:22.804016 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-07 00:37:22.804025 | orchestrator | Saturday 07 March 2026 00:37:17 +0000 (0:00:00.624) 0:00:09.199 ********
2026-03-07 00:37:22.804034 | orchestrator | ok: [testbed-manager]
2026-03-07 00:37:22.804047 | orchestrator |
2026-03-07 00:37:22.804061 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-07 00:37:22.804075 | orchestrator | Saturday 07 March 2026 00:37:17 +0000 (0:00:00.414) 0:00:09.614 ********
2026-03-07 00:37:22.804090 | orchestrator | ok: [testbed-manager]
2026-03-07 00:37:22.804104 | orchestrator |
2026-03-07 00:37:22.804117 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-07 00:37:22.804132 | orchestrator | Saturday 07 March 2026 00:37:18 +0000 (0:00:00.419) 0:00:10.034 ********
2026-03-07 00:37:22.804147 | orchestrator | changed: [testbed-manager]
2026-03-07 00:37:22.804162 | orchestrator |
2026-03-07 00:37:22.804175 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-07 00:37:22.804304 | orchestrator | Saturday 07 March 2026 00:37:19 +0000 (0:00:01.100) 0:00:11.135 ********
2026-03-07 00:37:22.804322 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-07 00:37:22.804331 | orchestrator | changed: [testbed-manager]
2026-03-07 00:37:22.804340 | orchestrator |
2026-03-07 00:37:22.804349 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-07 00:37:22.804358 | orchestrator | Saturday 07 March 2026 00:37:20 +0000 (0:00:00.894) 0:00:12.029 ********
2026-03-07 00:37:22.804367 | orchestrator | changed: [testbed-manager]
2026-03-07 00:37:22.804376 | orchestrator |
2026-03-07 00:37:22.804386 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-07 00:37:22.804395 | orchestrator | Saturday 07 March 2026 00:37:21 +0000 (0:00:01.583) 0:00:13.613 ********
2026-03-07 00:37:22.804403 | orchestrator | changed: [testbed-manager]
2026-03-07 00:37:22.804412 | orchestrator |
2026-03-07 00:37:22.804421 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:37:22.804431 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:37:22.804448 | orchestrator |
2026-03-07 00:37:22.804463 | orchestrator |
2026-03-07 00:37:22.804478 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:37:22.804509 | orchestrator | Saturday 07 March 2026 00:37:22 +0000 (0:00:00.884) 0:00:14.498 ********
2026-03-07 00:37:22.804526 | orchestrator | ===============================================================================
2026-03-07 00:37:22.804541 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.99s
2026-03-07 00:37:22.804554 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.58s
2026-03-07 00:37:22.804563 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.43s
2026-03-07 00:37:22.804572 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.10s
2026-03-07 00:37:22.804581 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.89s
2026-03-07 00:37:22.804590 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.88s
2026-03-07 00:37:22.804598 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.62s
2026-03-07 00:37:22.804607 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.52s
2026-03-07 00:37:22.804617 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s
2026-03-07 00:37:22.804631 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.41s
2026-03-07 00:37:22.804647 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s
2026-03-07 00:37:23.050548 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-07 00:37:23.086390 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-07 00:37:23.086509 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-07 00:37:23.161510 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 190 0 --:--:-- --:--:-- --:--:-- 191
2026-03-07 00:37:23.172061 | orchestrator | + osism apply --environment custom workarounds
2026-03-07 00:37:25.035217 | orchestrator | 2026-03-07 00:37:25 | INFO  | Trying to run play workarounds in environment custom
2026-03-07 00:37:35.163275 | orchestrator | 2026-03-07 00:37:35 | INFO  | Task f6647ea2-ccc8-40c8-9cbb-5c94629cc83a (workarounds) was prepared for execution.
2026-03-07 00:37:35.163417 | orchestrator | 2026-03-07 00:37:35 | INFO  | It takes a moment until task f6647ea2-ccc8-40c8-9cbb-5c94629cc83a (workarounds) has been started and output is visible here.
2026-03-07 00:37:59.382553 | orchestrator |
2026-03-07 00:37:59.382690 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 00:37:59.382716 | orchestrator |
2026-03-07 00:37:59.382735 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-07 00:37:59.382755 | orchestrator | Saturday 07 March 2026 00:37:39 +0000 (0:00:00.096) 0:00:00.096 ********
2026-03-07 00:37:59.382775 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-07 00:37:59.382792 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-07 00:37:59.382808 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-07 00:37:59.382827 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-07 00:37:59.382846 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-07 00:37:59.382864 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-07 00:37:59.382884 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-07 00:37:59.382903 | orchestrator |
2026-03-07 00:37:59.382923 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-07 00:37:59.382941 | orchestrator |
2026-03-07 00:37:59.382960 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-07 00:37:59.382981 | orchestrator | Saturday 07 March 2026 00:37:39 +0000 (0:00:00.644) 0:00:00.741 ********
2026-03-07 00:37:59.383000 | orchestrator | ok: [testbed-manager]
2026-03-07 00:37:59.383020 | orchestrator |
2026-03-07 00:37:59.383059 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-07 00:37:59.383072 | orchestrator |
2026-03-07 00:37:59.383084 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-07 00:37:59.383097 | orchestrator | Saturday 07 March 2026 00:37:41 +0000 (0:00:02.134) 0:00:02.876 ********
2026-03-07 00:37:59.383110 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:37:59.383122 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:37:59.383134 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:37:59.383145 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:37:59.383158 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:37:59.383170 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:37:59.383183 | orchestrator |
2026-03-07 00:37:59.383196 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-07 00:37:59.383243 | orchestrator |
2026-03-07 00:37:59.383256 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-07 00:37:59.383286 | orchestrator | Saturday 07 March 2026 00:37:43 +0000 (0:00:02.032) 0:00:04.908 ********
2026-03-07 00:37:59.383300 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-07 00:37:59.383314 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-07 00:37:59.383327 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-07 00:37:59.383339 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-07 00:37:59.383352 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-07 00:37:59.383365 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-07 00:37:59.383378 | orchestrator |
2026-03-07 00:37:59.383390 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-07 00:37:59.383403 | orchestrator | Saturday 07 March 2026 00:37:45 +0000 (0:00:01.491) 0:00:06.400 ********
2026-03-07 00:37:59.383416 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:37:59.383429 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:37:59.383455 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:37:59.383476 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:37:59.383488 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:37:59.383498 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:37:59.383509 | orchestrator |
2026-03-07 00:37:59.383520 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-07 00:37:59.383531 | orchestrator | Saturday 07 March 2026 00:37:49 +0000 (0:00:03.850) 0:00:10.250 ********
2026-03-07 00:37:59.383541 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:37:59.383552 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:37:59.383564 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:37:59.383574 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:37:59.383585 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:37:59.383595 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:37:59.383606 | orchestrator |
2026-03-07 00:37:59.383617 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-07 00:37:59.383628 | orchestrator |
2026-03-07 00:37:59.383638 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-07 00:37:59.383649 | orchestrator | Saturday 07 March 2026 00:37:49 +0000 (0:00:00.634) 0:00:10.885 ********
2026-03-07 00:37:59.383660 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:37:59.383671 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:37:59.383681 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:37:59.383692 | orchestrator | changed: [testbed-manager]
2026-03-07 00:37:59.383703 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:37:59.383713 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:37:59.383724 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:37:59.383743 | orchestrator |
2026-03-07 00:37:59.383754 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-07 00:37:59.383765 | orchestrator | Saturday 07 March 2026 00:37:51 +0000 (0:00:01.622) 0:00:12.508 ********
2026-03-07 00:37:59.383776 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:37:59.383787 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:37:59.383797 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:37:59.383808 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:37:59.383819 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:37:59.383829 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:37:59.383862 | orchestrator | changed: [testbed-manager]
2026-03-07 00:37:59.383873 | orchestrator |
2026-03-07 00:37:59.383884 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-07 00:37:59.383895 | orchestrator | Saturday 07 March 2026 00:37:53 +0000 (0:00:01.503) 0:00:14.017 ********
2026-03-07 00:37:59.383906 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:37:59.383916 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:37:59.383927 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:37:59.383938 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:37:59.383949 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:37:59.383959 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:37:59.383970 | orchestrator | ok: [testbed-manager]
2026-03-07 00:37:59.383981 | orchestrator |
2026-03-07 00:37:59.383991 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-07 00:37:59.384002 | orchestrator | Saturday 07 March 2026 00:37:54 +0000 (0:00:01.503) 0:00:15.521 ********
2026-03-07 00:37:59.384013 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:37:59.384024 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:37:59.384034 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:37:59.384045 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:37:59.384056 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:37:59.384066 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:37:59.384077 | orchestrator | changed: [testbed-manager]
2026-03-07 00:37:59.384088 | orchestrator |
2026-03-07 00:37:59.384098 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-07 00:37:59.384109 | orchestrator | Saturday 07 March 2026 00:37:56 +0000 (0:00:01.630) 0:00:17.152 ********
2026-03-07 00:37:59.384120 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:37:59.384131 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:37:59.384141 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:37:59.384154 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:37:59.384171 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:37:59.384188 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:37:59.384249 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:37:59.384268 | orchestrator |
2026-03-07 00:37:59.384280 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-07 00:37:59.384290 | orchestrator |
2026-03-07 00:37:59.384301 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-07 00:37:59.384312 | orchestrator | Saturday 07 March 2026 00:37:56 +0000 (0:00:00.517) 0:00:17.669 ********
2026-03-07 00:37:59.384323 | orchestrator | ok: [testbed-manager]
2026-03-07 00:37:59.384333 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:37:59.384344 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:37:59.384355 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:37:59.384365 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:37:59.384376 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:37:59.384394 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:37:59.384405 | orchestrator |
2026-03-07 00:37:59.384416 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:37:59.384428 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-07 00:37:59.384440 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 00:37:59.384459 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 00:37:59.384470 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 00:37:59.384481 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 00:37:59.384491 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 00:37:59.384502 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 00:37:59.384513 | orchestrator |
2026-03-07 00:37:59.384524 | orchestrator |
2026-03-07 00:37:59.384534 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:37:59.384545 | orchestrator | Saturday 07 March 2026 00:37:59 +0000 (0:00:02.694) 0:00:20.364 ********
2026-03-07 00:37:59.384555 | orchestrator | ===============================================================================
2026-03-07 00:37:59.384566 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.85s
2026-03-07 00:37:59.384577 | orchestrator | Install python3-docker -------------------------------------------------- 2.69s
2026-03-07 00:37:59.384587 | orchestrator | Apply netplan configuration --------------------------------------------- 2.13s
2026-03-07 00:37:59.384598 | orchestrator | Apply netplan configuration --------------------------------------------- 2.03s
2026-03-07 00:37:59.384608 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.63s
2026-03-07 00:37:59.384619 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.62s
2026-03-07 00:37:59.384630 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.51s
2026-03-07 00:37:59.384640 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.50s
2026-03-07 00:37:59.384651 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.49s
2026-03-07 00:37:59.384661 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.65s
2026-03-07 00:37:59.384672 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.63s
2026-03-07 00:37:59.384691 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.52s
2026-03-07 00:37:59.752880 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-07 00:38:11.637730 | orchestrator | 2026-03-07 00:38:11 | INFO  | Task 5115e0cd-3789-4b3f-ad28-bb59b9e93254 (reboot) was prepared for execution.
2026-03-07 00:38:11.637863 | orchestrator | 2026-03-07 00:38:11 | INFO  | It takes a moment until task 5115e0cd-3789-4b3f-ad28-bb59b9e93254 (reboot) has been started and output is visible here.
2026-03-07 00:38:21.145267 | orchestrator |
2026-03-07 00:38:21.145392 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-07 00:38:21.145409 | orchestrator |
2026-03-07 00:38:21.145421 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-07 00:38:21.145433 | orchestrator | Saturday 07 March 2026 00:38:15 +0000 (0:00:00.149) 0:00:00.149 ********
2026-03-07 00:38:21.145445 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:38:21.145457 | orchestrator |
2026-03-07 00:38:21.145468 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-07 00:38:21.145479 | orchestrator | Saturday 07 March 2026 00:38:15 +0000 (0:00:00.082) 0:00:00.231 ********
2026-03-07 00:38:21.145490 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:38:21.145501 | orchestrator |
2026-03-07 00:38:21.145512 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-07 00:38:21.145555 | orchestrator | Saturday 07 March 2026 00:38:16 +0000 (0:00:00.889) 0:00:01.120 ********
2026-03-07 00:38:21.145567 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:38:21.145578 | orchestrator |
2026-03-07 00:38:21.145588 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-07 00:38:21.145599 | orchestrator |
2026-03-07 00:38:21.145610 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-07 00:38:21.145621 | orchestrator | Saturday 07 March 2026 00:38:16 +0000 (0:00:00.113) 0:00:01.234 ********
2026-03-07 00:38:21.145632 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:38:21.145642 | orchestrator |
2026-03-07 00:38:21.145653 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-07 00:38:21.145664 | orchestrator | Saturday 07 March 2026 00:38:16 +0000 (0:00:00.089) 0:00:01.324 ********
2026-03-07 00:38:21.145675 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:38:21.145685 | orchestrator |
2026-03-07 00:38:21.145696 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-07 00:38:21.145724 | orchestrator | Saturday 07 March 2026 00:38:17 +0000 (0:00:00.661) 0:00:01.985 ********
2026-03-07 00:38:21.145735 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:38:21.145746 | orchestrator |
2026-03-07 00:38:21.145760 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-07 00:38:21.145772 | orchestrator |
2026-03-07 00:38:21.145784 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-07 00:38:21.145797 | orchestrator | Saturday 07 March 2026 00:38:17 +0000 (0:00:00.096) 0:00:02.081 ********
2026-03-07 00:38:21.145809 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:38:21.145823 | orchestrator |
2026-03-07 00:38:21.145835 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-07 00:38:21.145848 | orchestrator | Saturday 07 March 2026 00:38:17 +0000 (0:00:00.161) 0:00:02.243 ********
2026-03-07 00:38:21.145859 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:38:21.145869 | orchestrator |
2026-03-07 00:38:21.145881 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-07 00:38:21.145892 | orchestrator | Saturday 07 March 2026 00:38:18 +0000 (0:00:00.670) 0:00:02.914 ********
2026-03-07 00:38:21.145903 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:38:21.145913 | orchestrator |
2026-03-07 00:38:21.145924 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-07 00:38:21.145935 | orchestrator |
2026-03-07 00:38:21.145946 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-07 00:38:21.145956 | orchestrator | Saturday 07 March 2026 00:38:18 +0000 (0:00:00.110) 0:00:03.025 ********
2026-03-07 00:38:21.145967 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:38:21.145978 | orchestrator |
2026-03-07 00:38:21.145988 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-07 00:38:21.145999 | orchestrator | Saturday 07 March 2026 00:38:18 +0000 (0:00:00.101) 0:00:03.126 ********
2026-03-07 00:38:21.146010 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:38:21.146090 | orchestrator |
2026-03-07 00:38:21.146102 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-07 00:38:21.146113 | orchestrator | Saturday 07 March 2026 00:38:18 +0000 (0:00:00.641) 0:00:03.768 ********
2026-03-07 00:38:21.146124 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:38:21.146135 | orchestrator |
2026-03-07 00:38:21.146146 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-07 00:38:21.146157 | orchestrator |
2026-03-07 00:38:21.146187 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-07 00:38:21.146199 | orchestrator | Saturday 07 March 2026 00:38:19 +0000 (0:00:00.095) 0:00:03.863 ********
2026-03-07 00:38:21.146209 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:38:21.146220 | orchestrator |
2026-03-07 00:38:21.146231 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-07 00:38:21.146242 | orchestrator | Saturday 07 March 2026 00:38:19 +0000 (0:00:00.087) 0:00:03.950 ********
2026-03-07 00:38:21.146262 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:38:21.146274 | orchestrator |
2026-03-07 00:38:21.146284 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-07 00:38:21.146295 | orchestrator | Saturday 07 March 2026 00:38:19 +0000 (0:00:00.673) 0:00:04.624 ********
2026-03-07 00:38:21.146306 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:38:21.146317 | orchestrator |
2026-03-07 00:38:21.146329 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-07 00:38:21.146340 | orchestrator |
2026-03-07 00:38:21.146351 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-07 00:38:21.146362 | orchestrator | Saturday 07 March 2026 00:38:19 +0000 (0:00:00.135) 0:00:04.760 ********
2026-03-07 00:38:21.146372 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:38:21.146384 | orchestrator |
2026-03-07 00:38:21.146395 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-07 00:38:21.146406 | orchestrator | Saturday 07 March 2026 00:38:20 +0000 (0:00:00.088) 0:00:04.849 ********
2026-03-07 00:38:21.146416 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:38:21.146428 | orchestrator |
2026-03-07 00:38:21.146439 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-07 00:38:21.146450 | orchestrator | Saturday 07 March 2026 00:38:20 +0000 (0:00:00.730) 0:00:05.579 ********
2026-03-07 00:38:21.146481 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:38:21.146493 | orchestrator |
2026-03-07 00:38:21.146504 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:38:21.146516 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 00:38:21.146528 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:38:21.146539 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:38:21.146550 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:38:21.146561 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:38:21.146572 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:38:21.146583 | orchestrator | 2026-03-07 00:38:21.146594 | orchestrator | 2026-03-07 00:38:21.146605 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:38:21.146616 | orchestrator | Saturday 07 March 2026 00:38:20 +0000 (0:00:00.038) 0:00:05.618 ******** 2026-03-07 00:38:21.146634 | orchestrator | =============================================================================== 2026-03-07 00:38:21.146645 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.27s 2026-03-07 00:38:21.146656 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.61s 2026-03-07 00:38:21.146667 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.59s 2026-03-07 00:38:21.471367 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-07 00:38:33.640183 | orchestrator | 2026-03-07 00:38:33 | INFO  | Task b74004a0-7415-4d94-9911-5c8c2423ecd9 (wait-for-connection) was prepared for execution. 2026-03-07 00:38:33.640270 | orchestrator | 2026-03-07 00:38:33 | INFO  | It takes a moment until task b74004a0-7415-4d94-9911-5c8c2423ecd9 (wait-for-connection) has been started and output is visible here. 
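The `osism apply wait-for-connection` step above blocks until every rebooted node answers again — the same retry-until-reachable structure the reboot plays rely on. A generic form of that pattern can be sketched in shell (this helper and its names are illustrative, not taken from the job's scripts):

```shell
#!/usr/bin/env bash
# Illustrative retry helper: run a probe command until it succeeds or the
# attempt budget runs out. Mirrors the reboot-then-poll structure of the
# plays above; max attempts and interval are caller-supplied.
retry_until() {
    local max=$1 interval=$2
    shift 2
    local i
    for (( i = 1; i <= max; i++ )); do
        # Probe succeeded: the node (or service) is reachable again.
        "$@" && return 0
        sleep "$interval"
    done
    return 1  # budget exhausted without a successful probe
}
```

In the Ansible case the probe is the `wait_for_connection` module; in plain shell it could be an `ssh -o BatchMode=yes <node> true` call.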
2026-03-07 00:38:49.646590 | orchestrator | 2026-03-07 00:38:49.646734 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-07 00:38:49.646752 | orchestrator | 2026-03-07 00:38:49.646764 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-07 00:38:49.646776 | orchestrator | Saturday 07 March 2026 00:38:37 +0000 (0:00:00.228) 0:00:00.229 ******** 2026-03-07 00:38:49.646787 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:38:49.646800 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:38:49.646811 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:38:49.646822 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:38:49.646833 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:38:49.646844 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:38:49.646855 | orchestrator | 2026-03-07 00:38:49.646866 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:38:49.646878 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:38:49.646891 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:38:49.646902 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:38:49.646913 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:38:49.646924 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:38:49.646935 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:38:49.646946 | orchestrator | 2026-03-07 00:38:49.646958 | orchestrator | 2026-03-07 00:38:49.646969 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-07 00:38:49.646981 | orchestrator | Saturday 07 March 2026 00:38:49 +0000 (0:00:11.565) 0:00:11.794 ******** 2026-03-07 00:38:49.646992 | orchestrator | =============================================================================== 2026-03-07 00:38:49.647003 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.57s 2026-03-07 00:38:49.951305 | orchestrator | + osism apply hddtemp 2026-03-07 00:39:02.003183 | orchestrator | 2026-03-07 00:39:02 | INFO  | Task 3d9b8840-a706-4e43-801c-3c803acf3dd9 (hddtemp) was prepared for execution. 2026-03-07 00:39:02.003292 | orchestrator | 2026-03-07 00:39:02 | INFO  | It takes a moment until task 3d9b8840-a706-4e43-801c-3c803acf3dd9 (hddtemp) has been started and output is visible here. 2026-03-07 00:39:30.457430 | orchestrator | 2026-03-07 00:39:30.457569 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-07 00:39:30.457598 | orchestrator | 2026-03-07 00:39:30.457617 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-07 00:39:30.457635 | orchestrator | Saturday 07 March 2026 00:39:06 +0000 (0:00:00.240) 0:00:00.240 ******** 2026-03-07 00:39:30.457654 | orchestrator | ok: [testbed-manager] 2026-03-07 00:39:30.457673 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:39:30.457692 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:39:30.457711 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:39:30.457730 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:39:30.457745 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:39:30.457757 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:39:30.457768 | orchestrator | 2026-03-07 00:39:30.457779 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-03-07 00:39:30.457790 | orchestrator | Saturday 07 March 2026 
00:39:06 +0000 (0:00:00.675) 0:00:00.915 ******** 2026-03-07 00:39:30.457803 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:39:30.457850 | orchestrator | 2026-03-07 00:39:30.457862 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-07 00:39:30.457873 | orchestrator | Saturday 07 March 2026 00:39:07 +0000 (0:00:01.153) 0:00:02.068 ******** 2026-03-07 00:39:30.457883 | orchestrator | ok: [testbed-manager] 2026-03-07 00:39:30.457894 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:39:30.457905 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:39:30.457916 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:39:30.457926 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:39:30.457938 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:39:30.457948 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:39:30.457959 | orchestrator | 2026-03-07 00:39:30.457970 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-07 00:39:30.457995 | orchestrator | Saturday 07 March 2026 00:39:09 +0000 (0:00:01.952) 0:00:04.021 ******** 2026-03-07 00:39:30.458007 | orchestrator | changed: [testbed-manager] 2026-03-07 00:39:30.458084 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:39:30.458096 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:39:30.458107 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:39:30.458118 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:39:30.458129 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:39:30.458140 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:39:30.458150 | orchestrator | 2026-03-07 00:39:30.458161 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-03-07 00:39:30.458172 | orchestrator | Saturday 07 March 2026 00:39:11 +0000 (0:00:01.158) 0:00:05.179 ******** 2026-03-07 00:39:30.458183 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:39:30.458194 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:39:30.458204 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:39:30.458215 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:39:30.458226 | orchestrator | ok: [testbed-manager] 2026-03-07 00:39:30.458301 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:39:30.458312 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:39:30.458323 | orchestrator | 2026-03-07 00:39:30.458334 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-07 00:39:30.458345 | orchestrator | Saturday 07 March 2026 00:39:12 +0000 (0:00:01.128) 0:00:06.308 ******** 2026-03-07 00:39:30.458356 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:39:30.458367 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:39:30.458377 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:39:30.458388 | orchestrator | changed: [testbed-manager] 2026-03-07 00:39:30.458399 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:39:30.458410 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:39:30.458421 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:39:30.458431 | orchestrator | 2026-03-07 00:39:30.458442 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-07 00:39:30.458453 | orchestrator | Saturday 07 March 2026 00:39:12 +0000 (0:00:00.726) 0:00:07.034 ******** 2026-03-07 00:39:30.458464 | orchestrator | changed: [testbed-manager] 2026-03-07 00:39:30.458475 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:39:30.458486 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:39:30.458496 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:39:30.458507 | orchestrator | changed: 
[testbed-node-4] 2026-03-07 00:39:30.458517 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:39:30.458528 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:39:30.458539 | orchestrator | 2026-03-07 00:39:30.458550 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-07 00:39:30.458561 | orchestrator | Saturday 07 March 2026 00:39:26 +0000 (0:00:13.913) 0:00:20.948 ******** 2026-03-07 00:39:30.458572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:39:30.458584 | orchestrator | 2026-03-07 00:39:30.458605 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-07 00:39:30.458616 | orchestrator | Saturday 07 March 2026 00:39:28 +0000 (0:00:01.330) 0:00:22.279 ******** 2026-03-07 00:39:30.458627 | orchestrator | changed: [testbed-manager] 2026-03-07 00:39:30.458637 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:39:30.458648 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:39:30.458659 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:39:30.458670 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:39:30.458681 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:39:30.458692 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:39:30.458703 | orchestrator | 2026-03-07 00:39:30.458713 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:39:30.458725 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:39:30.458760 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:39:30.458773 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:39:30.458784 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:39:30.458795 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:39:30.458806 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:39:30.458816 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:39:30.458827 | orchestrator | 2026-03-07 00:39:30.458857 | orchestrator | 2026-03-07 00:39:30.458868 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:39:30.458889 | orchestrator | Saturday 07 March 2026 00:39:30 +0000 (0:00:01.908) 0:00:24.187 ******** 2026-03-07 00:39:30.458929 | orchestrator | =============================================================================== 2026-03-07 00:39:30.458941 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.91s 2026-03-07 00:39:30.458952 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.95s 2026-03-07 00:39:30.458963 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.91s 2026-03-07 00:39:30.458980 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.33s 2026-03-07 00:39:30.458991 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.16s 2026-03-07 00:39:30.459002 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.15s 2026-03-07 00:39:30.459013 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.13s 2026-03-07 00:39:30.459024 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.73s 2026-03-07 00:39:30.459035 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.68s 2026-03-07 00:39:30.771414 | orchestrator | ++ semver 9.5.0 7.1.1 2026-03-07 00:39:30.817771 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-07 00:39:30.817880 | orchestrator | + sudo systemctl restart manager.service 2026-03-07 00:39:44.297747 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-07 00:39:44.297855 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-07 00:39:44.297873 | orchestrator | + local max_attempts=60 2026-03-07 00:39:44.297886 | orchestrator | + local name=ceph-ansible 2026-03-07 00:39:44.297896 | orchestrator | + local attempt_num=1 2026-03-07 00:39:44.297907 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:39:44.333113 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:39:44.333208 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:39:44.333222 | orchestrator | + sleep 5 2026-03-07 00:39:49.337172 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:39:49.370232 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:39:49.370327 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:39:49.370341 | orchestrator | + sleep 5 2026-03-07 00:39:54.373726 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:39:54.408964 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:39:54.409033 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:39:54.409038 | orchestrator | + sleep 5 2026-03-07 00:39:59.413349 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:39:59.455468 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:39:59.455550 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-03-07 00:39:59.455557 | orchestrator | + sleep 5 2026-03-07 00:40:04.459788 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:40:04.502249 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:40:04.502329 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:40:04.502338 | orchestrator | + sleep 5 2026-03-07 00:40:09.507759 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:40:09.558084 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:40:09.558171 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:40:09.558187 | orchestrator | + sleep 5 2026-03-07 00:40:14.570954 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:40:14.609586 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:40:14.609701 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:40:14.609723 | orchestrator | + sleep 5 2026-03-07 00:40:19.614963 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:40:19.685773 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-07 00:40:19.685872 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:40:19.685888 | orchestrator | + sleep 5 2026-03-07 00:40:24.688707 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:40:24.724705 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-07 00:40:24.724843 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:40:24.724869 | orchestrator | + sleep 5 2026-03-07 00:40:29.727593 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:40:29.765375 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-07 00:40:29.765497 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-03-07 00:40:29.765525 | orchestrator | + sleep 5 2026-03-07 00:40:34.770091 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:40:34.803754 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-07 00:40:34.803931 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:40:34.803958 | orchestrator | + sleep 5 2026-03-07 00:40:39.808253 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:40:39.845789 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-07 00:40:39.845873 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:40:39.845896 | orchestrator | + sleep 5 2026-03-07 00:40:44.850780 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:40:44.882831 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-07 00:40:44.882941 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-07 00:40:44.883017 | orchestrator | + sleep 5 2026-03-07 00:40:49.888819 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-07 00:40:49.927716 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:40:49.927824 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-07 00:40:49.927842 | orchestrator | + local max_attempts=60 2026-03-07 00:40:49.927855 | orchestrator | + local name=kolla-ansible 2026-03-07 00:40:49.927867 | orchestrator | + local attempt_num=1 2026-03-07 00:40:49.927879 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-07 00:40:49.958784 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:40:49.958878 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-07 00:40:49.958892 | orchestrator | + local max_attempts=60 2026-03-07 00:40:49.958936 | orchestrator | + local name=osism-ansible 2026-03-07 00:40:49.958948 | 
orchestrator | + local attempt_num=1 2026-03-07 00:40:49.959955 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-07 00:40:49.995169 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-07 00:40:49.995263 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-07 00:40:49.995278 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-07 00:40:50.167334 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-07 00:40:50.310542 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-07 00:40:50.470158 | orchestrator | ARA in osism-ansible already disabled. 2026-03-07 00:40:50.615900 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-07 00:40:50.616336 | orchestrator | + osism apply gather-facts 2026-03-07 00:41:02.761263 | orchestrator | 2026-03-07 00:41:02 | INFO  | Task 7f49aed7-7006-4e45-a406-2c84f5f1182e (gather-facts) was prepared for execution. 2026-03-07 00:41:02.761378 | orchestrator | 2026-03-07 00:41:02 | INFO  | It takes a moment until task 7f49aed7-7006-4e45-a406-2c84f5f1182e (gather-facts) has been started and output is visible here. 
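The `wait_for_container_healthy` helper traced above polls `docker inspect` every 5 seconds until the container reports `healthy`. Reconstructed from the `set -x` output, it looks roughly like the sketch below; the `DOCKER_CMD` indirection is an assumption added here so the helper can be exercised without a Docker daemon (the traced original calls `/usr/bin/docker` directly), and the exact failure message is guessed:

```shell
#!/usr/bin/env bash
# Sketch of wait_for_container_healthy as reconstructed from the trace:
# poll the container's health status until it is "healthy", giving up
# after max_attempts polls with a 5-second sleep between them.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # DOCKER_CMD override is an addition for testability, not in the original.
    local docker=${DOCKER_CMD:-/usr/bin/docker}
    until [[ "$("$docker" inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        sleep 5
    done
}
```

The trace shows the status moving `unhealthy` → `starting` → `healthy` for `ceph-ansible` after the manager service restart, which is why the loop runs for roughly a minute before succeeding.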
2026-03-07 00:41:17.014626 | orchestrator | 2026-03-07 00:41:17.014741 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-07 00:41:17.014757 | orchestrator | 2026-03-07 00:41:17.014770 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-07 00:41:17.014781 | orchestrator | Saturday 07 March 2026 00:41:06 +0000 (0:00:00.209) 0:00:00.209 ******** 2026-03-07 00:41:17.014793 | orchestrator | ok: [testbed-manager] 2026-03-07 00:41:17.014805 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:41:17.014817 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:41:17.014828 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:41:17.014839 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:41:17.014850 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:41:17.014861 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:41:17.014872 | orchestrator | 2026-03-07 00:41:17.014883 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-07 00:41:17.014894 | orchestrator | 2026-03-07 00:41:17.014905 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-07 00:41:17.014916 | orchestrator | Saturday 07 March 2026 00:41:16 +0000 (0:00:09.224) 0:00:09.433 ******** 2026-03-07 00:41:17.014927 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:41:17.014939 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:41:17.014950 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:41:17.014961 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:41:17.014972 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:17.014982 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:41:17.014993 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:41:17.015004 | orchestrator | 2026-03-07 00:41:17.015015 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-07 00:41:17.015026 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:41:17.015039 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:41:17.015050 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:41:17.015061 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:41:17.015072 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:41:17.015083 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:41:17.015094 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-07 00:41:17.015132 | orchestrator | 2026-03-07 00:41:17.015144 | orchestrator | 2026-03-07 00:41:17.015155 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:41:17.015168 | orchestrator | Saturday 07 March 2026 00:41:16 +0000 (0:00:00.535) 0:00:09.969 ******** 2026-03-07 00:41:17.015180 | orchestrator | =============================================================================== 2026-03-07 00:41:17.015194 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.22s 2026-03-07 00:41:17.015208 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2026-03-07 00:41:17.278385 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-07 00:41:17.289555 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-07 
00:41:17.309075 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-07 00:41:17.328483 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-07 00:41:17.348414 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-07 00:41:17.359514 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-07 00:41:17.370202 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-07 00:41:17.380632 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-07 00:41:17.391640 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-07 00:41:17.401451 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-07 00:41:17.411902 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-07 00:41:17.422700 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-07 00:41:17.441555 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-07 00:41:17.461351 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-07 00:41:17.479571 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-07 00:41:17.498751 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-07 00:41:17.515128 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-07 00:41:17.528529 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-07 00:41:17.548422 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-07 00:41:17.562701 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-07 00:41:17.574677 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-07 00:41:17.585332 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-07 00:41:17.597319 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-07 00:41:17.614324 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-07 00:41:17.767773 | orchestrator | ok: Runtime: 0:23:48.724605 2026-03-07 00:41:17.876328 | 2026-03-07 00:41:17.876472 | TASK [Deploy services] 2026-03-07 00:41:18.411889 | orchestrator | skipping: Conditional result was False 2026-03-07 00:41:18.429016 | 2026-03-07 00:41:18.429219 | TASK [Deploy in a nutshell] 2026-03-07 00:41:19.155479 | orchestrator | + set -e 2026-03-07 00:41:19.156971 | orchestrator | 2026-03-07 00:41:19.157012 | orchestrator | # PULL IMAGES 2026-03-07 00:41:19.157027 | orchestrator | 2026-03-07 00:41:19.157046 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-07 00:41:19.157067 | orchestrator | ++ export INTERACTIVE=false 2026-03-07 00:41:19.157082 | orchestrator | ++ INTERACTIVE=false 2026-03-07 00:41:19.157126 | 
orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-07 00:41:19.157150 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-07 00:41:19.157164 | orchestrator | + source /opt/manager-vars.sh 2026-03-07 00:41:19.157184 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-07 00:41:19.157211 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-07 00:41:19.157229 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-07 00:41:19.157298 | orchestrator | ++ CEPH_VERSION=reef 2026-03-07 00:41:19.157313 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-07 00:41:19.157332 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-07 00:41:19.157343 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-07 00:41:19.157358 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-07 00:41:19.157370 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-07 00:41:19.157382 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-07 00:41:19.157393 | orchestrator | ++ export ARA=false 2026-03-07 00:41:19.157404 | orchestrator | ++ ARA=false 2026-03-07 00:41:19.157415 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-07 00:41:19.157426 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-07 00:41:19.157437 | orchestrator | ++ export TEMPEST=true 2026-03-07 00:41:19.157448 | orchestrator | ++ TEMPEST=true 2026-03-07 00:41:19.157458 | orchestrator | ++ export IS_ZUUL=true 2026-03-07 00:41:19.157469 | orchestrator | ++ IS_ZUUL=true 2026-03-07 00:41:19.157480 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18 2026-03-07 00:41:19.157492 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.18 2026-03-07 00:41:19.157503 | orchestrator | ++ export EXTERNAL_API=false 2026-03-07 00:41:19.157514 | orchestrator | ++ EXTERNAL_API=false 2026-03-07 00:41:19.157524 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-07 00:41:19.157536 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-07 00:41:19.157547 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-07 00:41:19.157557 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-07 00:41:19.157568 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-07 00:41:19.157588 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-07 00:41:19.157599 | orchestrator | + echo 2026-03-07 00:41:19.157610 | orchestrator | + echo '# PULL IMAGES' 2026-03-07 00:41:19.157621 | orchestrator | + echo 2026-03-07 00:41:19.157639 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-07 00:41:19.217364 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-07 00:41:19.217502 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-07 00:41:21.089185 | orchestrator | 2026-03-07 00:41:21 | INFO  | Trying to run play pull-images in environment custom 2026-03-07 00:41:31.171502 | orchestrator | 2026-03-07 00:41:31 | INFO  | Task 9bb7cbd3-45d7-4a1c-86e4-bf41d3e9be6f (pull-images) was prepared for execution. 2026-03-07 00:41:31.171634 | orchestrator | 2026-03-07 00:41:31 | INFO  | Task 9bb7cbd3-45d7-4a1c-86e4-bf41d3e9be6f is running in background. No more output. Check ARA for logs. 2026-03-07 00:41:33.462669 | orchestrator | 2026-03-07 00:41:33 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-07 00:41:43.604001 | orchestrator | 2026-03-07 00:41:43 | INFO  | Task f7b5a3e8-32d8-40d5-ab5c-56d4aac645e8 (wipe-partitions) was prepared for execution. 2026-03-07 00:41:43.604084 | orchestrator | 2026-03-07 00:41:43 | INFO  | It takes a moment until task f7b5a3e8-32d8-40d5-ab5c-56d4aac645e8 (wipe-partitions) has been started and output is visible here. 
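The trace above gates the `osism apply ... pull-images` call on `semver 9.5.0 7.0.0` returning `1` (manager version at or above 7.0.0). The `semver` helper comes from `include.sh` and its implementation is not shown in this log; a minimal sketch of an equivalent comparison using `sort -V`, purely for illustration, could look like:

```shell
# Illustrative stand-in for the semver helper sourced from include.sh
# (the real helper's implementation is not visible in this trace).
# Prints 1 if $1 > $2, 0 if equal, -1 if $1 < $2.
semver_cmp() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
        echo 1
    else
        echo -1
    fi
}

# Mirror of the gate in the trace: only pull images on manager >= 7.0.0
if [ "$(semver_cmp 9.5.0 7.0.0)" -ge 0 ]; then
    echo "manager >= 7.0.0: run pull-images play"
fi
```

The `-ge 0` test matches the `[[ 1 -ge 0 ]]` line in the trace, so both "newer" and "equal" versions take the pull-images path.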
2026-03-07 00:41:55.709379 | orchestrator | 2026-03-07 00:41:55.709529 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-07 00:41:55.709630 | orchestrator | 2026-03-07 00:41:55.709645 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-07 00:41:55.709663 | orchestrator | Saturday 07 March 2026 00:41:47 +0000 (0:00:00.120) 0:00:00.120 ******** 2026-03-07 00:41:55.709674 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:41:55.709687 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:41:55.709699 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:41:55.709710 | orchestrator | 2026-03-07 00:41:55.709722 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-07 00:41:55.709765 | orchestrator | Saturday 07 March 2026 00:41:48 +0000 (0:00:00.565) 0:00:00.686 ******** 2026-03-07 00:41:55.709777 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:41:55.709788 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:41:55.709799 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:41:55.709815 | orchestrator | 2026-03-07 00:41:55.709827 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-07 00:41:55.709838 | orchestrator | Saturday 07 March 2026 00:41:48 +0000 (0:00:00.307) 0:00:00.993 ******** 2026-03-07 00:41:55.709849 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:41:55.709864 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:41:55.709876 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:41:55.709888 | orchestrator | 2026-03-07 00:41:55.709900 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-07 00:41:55.709913 | orchestrator | Saturday 07 March 2026 00:41:49 +0000 (0:00:00.567) 0:00:01.560 ******** 2026-03-07 00:41:55.709926 | orchestrator | skipping: 
[testbed-node-3] 2026-03-07 00:41:55.709938 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:41:55.709950 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:41:55.709962 | orchestrator | 2026-03-07 00:41:55.709974 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-07 00:41:55.709987 | orchestrator | Saturday 07 March 2026 00:41:49 +0000 (0:00:00.228) 0:00:01.788 ******** 2026-03-07 00:41:55.710000 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-07 00:41:55.710076 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-07 00:41:55.710091 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-07 00:41:55.710103 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-07 00:41:55.710116 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-07 00:41:55.710128 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-07 00:41:55.710139 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-07 00:41:55.710152 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-07 00:41:55.710165 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-07 00:41:55.710177 | orchestrator | 2026-03-07 00:41:55.710190 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-03-07 00:41:55.710201 | orchestrator | Saturday 07 March 2026 00:41:50 +0000 (0:00:01.202) 0:00:02.991 ******** 2026-03-07 00:41:55.710213 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-07 00:41:55.710224 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-07 00:41:55.710235 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-07 00:41:55.710246 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-07 00:41:55.710257 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-07 00:41:55.710267 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-03-07 00:41:55.710278 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-07 00:41:55.710288 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-07 00:41:55.710299 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-07 00:41:55.710310 | orchestrator | 2026-03-07 00:41:55.710321 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-07 00:41:55.710332 | orchestrator | Saturday 07 March 2026 00:41:52 +0000 (0:00:01.604) 0:00:04.595 ******** 2026-03-07 00:41:55.710343 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-07 00:41:55.710353 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-07 00:41:55.710364 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-07 00:41:55.710374 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-07 00:41:55.710385 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-07 00:41:55.710396 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-07 00:41:55.710406 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-07 00:41:55.710425 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-07 00:41:55.710445 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-07 00:41:55.710456 | orchestrator | 2026-03-07 00:41:55.710467 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-07 00:41:55.710478 | orchestrator | Saturday 07 March 2026 00:41:54 +0000 (0:00:02.144) 0:00:06.740 ******** 2026-03-07 00:41:55.710488 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:41:55.710499 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:41:55.710510 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:41:55.710520 | orchestrator | 2026-03-07 00:41:55.710531 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-03-07 00:41:55.710542 | orchestrator | Saturday 07 March 2026 00:41:54 +0000 (0:00:00.610) 0:00:07.351 ******** 2026-03-07 00:41:55.710576 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:41:55.710587 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:41:55.710598 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:41:55.710608 | orchestrator | 2026-03-07 00:41:55.710620 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:41:55.710632 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:41:55.710646 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:41:55.710679 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:41:55.710691 | orchestrator | 2026-03-07 00:41:55.710702 | orchestrator | 2026-03-07 00:41:55.710713 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:41:55.710724 | orchestrator | Saturday 07 March 2026 00:41:55 +0000 (0:00:00.646) 0:00:07.997 ******** 2026-03-07 00:41:55.710735 | orchestrator | =============================================================================== 2026-03-07 00:41:55.710746 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.14s 2026-03-07 00:41:55.710756 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.60s 2026-03-07 00:41:55.710767 | orchestrator | Check device availability ----------------------------------------------- 1.20s 2026-03-07 00:41:55.710778 | orchestrator | Request device events from the kernel ----------------------------------- 0.65s 2026-03-07 00:41:55.710788 | orchestrator | Reload udev rules 
------------------------------------------------------- 0.61s 2026-03-07 00:41:55.710799 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.57s 2026-03-07 00:41:55.710810 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.57s 2026-03-07 00:41:55.710821 | orchestrator | Remove all rook related logical devices --------------------------------- 0.31s 2026-03-07 00:41:55.710832 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s 2026-03-07 00:42:07.815473 | orchestrator | 2026-03-07 00:42:07 | INFO  | Task a471d526-1306-4553-9ce4-0763654564ce (facts) was prepared for execution. 2026-03-07 00:42:07.815572 | orchestrator | 2026-03-07 00:42:07 | INFO  | It takes a moment until task a471d526-1306-4553-9ce4-0763654564ce (facts) has been started and output is visible here. 2026-03-07 00:42:20.588657 | orchestrator | 2026-03-07 00:42:20.588917 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-07 00:42:20.588933 | orchestrator | 2026-03-07 00:42:20.588941 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-07 00:42:20.588949 | orchestrator | Saturday 07 March 2026 00:42:11 +0000 (0:00:00.249) 0:00:00.249 ******** 2026-03-07 00:42:20.588957 | orchestrator | ok: [testbed-manager] 2026-03-07 00:42:20.588965 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:42:20.588972 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:42:20.588978 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:42:20.589008 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:42:20.589015 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:42:20.589021 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:42:20.589028 | orchestrator | 2026-03-07 00:42:20.589035 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-07 
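The wipe-partitions play above boils down to a fixed per-device sequence: check the block device exists, clear filesystem/partition signatures with `wipefs`, zero the first 32M, then refresh udev. A dry-run sketch of that sequence (assuming the same `/dev/sdb`..`/dev/sdd` device list as the play; it only prints the commands, since running them for real is destructive):

```shell
# Dry-run sketch of the wipe-partitions sequence from the play above.
# Prints each command instead of executing it -- safe to run anywhere.
wipe_sequence() {
    for dev in "$@"; do
        echo "test -b $dev"                                       # Check device availability
        echo "wipefs --all $dev"                                  # Wipe partitions with wipefs
        echo "dd if=/dev/zero of=$dev bs=1M count=32 oflag=direct" # Overwrite first 32M with zeros
    done
    echo "udevadm control --reload-rules"                         # Reload udev rules
    echo "udevadm trigger"                                        # Request device events from the kernel
}

wipe_sequence /dev/sdb /dev/sdc /dev/sdd
```

The udev steps come last so the kernel re-reads the now-empty devices once, after all of them have been zeroed, rather than once per device.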
00:42:20.589042 | orchestrator | Saturday 07 March 2026 00:42:12 +0000 (0:00:01.031) 0:00:01.281 ******** 2026-03-07 00:42:20.589049 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:42:20.589056 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:42:20.589065 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:42:20.589072 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:42:20.589078 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:20.589085 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:20.589091 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:20.589098 | orchestrator | 2026-03-07 00:42:20.589105 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-07 00:42:20.589111 | orchestrator | 2026-03-07 00:42:20.589118 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-07 00:42:20.589125 | orchestrator | Saturday 07 March 2026 00:42:13 +0000 (0:00:01.049) 0:00:02.330 ******** 2026-03-07 00:42:20.589131 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:42:20.589138 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:42:20.589145 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:42:20.589152 | orchestrator | ok: [testbed-manager] 2026-03-07 00:42:20.589159 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:42:20.589166 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:42:20.589172 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:42:20.589179 | orchestrator | 2026-03-07 00:42:20.589185 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-07 00:42:20.589192 | orchestrator | 2026-03-07 00:42:20.589199 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-07 00:42:20.589207 | orchestrator | Saturday 07 March 2026 00:42:19 +0000 (0:00:05.745) 0:00:08.075 ******** 2026-03-07 00:42:20.589215 | 
orchestrator | skipping: [testbed-manager] 2026-03-07 00:42:20.589223 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:42:20.589231 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:42:20.589239 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:42:20.589260 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:20.589268 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:42:20.589276 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:42:20.589284 | orchestrator | 2026-03-07 00:42:20.589292 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:42:20.589300 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:42:20.589309 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:42:20.589318 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:42:20.589326 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:42:20.589333 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:42:20.589341 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:42:20.589349 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:42:20.589356 | orchestrator | 2026-03-07 00:42:20.589364 | orchestrator | 2026-03-07 00:42:20.589372 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:42:20.589387 | orchestrator | Saturday 07 March 2026 00:42:20 +0000 (0:00:00.500) 0:00:08.575 ******** 2026-03-07 00:42:20.589395 | orchestrator | 
=============================================================================== 2026-03-07 00:42:20.589403 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.75s 2026-03-07 00:42:20.589411 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.05s 2026-03-07 00:42:20.589419 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.03s 2026-03-07 00:42:20.589426 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2026-03-07 00:42:22.882336 | orchestrator | 2026-03-07 00:42:22 | INFO  | Task 32a2264e-b2ea-4591-9e4c-593cd5aa4d58 (ceph-configure-lvm-volumes) was prepared for execution. 2026-03-07 00:42:22.882442 | orchestrator | 2026-03-07 00:42:22 | INFO  | It takes a moment until task 32a2264e-b2ea-4591-9e4c-593cd5aa4d58 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-03-07 00:42:34.649235 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-07 00:42:34.649319 | orchestrator | 2.16.14 2026-03-07 00:42:34.649331 | orchestrator | 2026-03-07 00:42:34.649341 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-07 00:42:34.649350 | orchestrator | 2026-03-07 00:42:34.649358 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-07 00:42:34.649366 | orchestrator | Saturday 07 March 2026 00:42:27 +0000 (0:00:00.299) 0:00:00.299 ******** 2026-03-07 00:42:34.649377 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-07 00:42:34.649385 | orchestrator | 2026-03-07 00:42:34.649393 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-07 00:42:34.649401 | orchestrator | Saturday 07 March 2026 00:42:27 +0000 (0:00:00.221) 0:00:00.521 ******** 2026-03-07 00:42:34.649409 | 
orchestrator | ok: [testbed-node-3] 2026-03-07 00:42:34.649417 | orchestrator | 2026-03-07 00:42:34.649425 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:34.649433 | orchestrator | Saturday 07 March 2026 00:42:27 +0000 (0:00:00.206) 0:00:00.727 ******** 2026-03-07 00:42:34.649441 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-07 00:42:34.649449 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-07 00:42:34.649457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-07 00:42:34.649465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-07 00:42:34.649473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-07 00:42:34.649481 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-07 00:42:34.649489 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-07 00:42:34.649496 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-07 00:42:34.649504 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-07 00:42:34.649512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-07 00:42:34.649520 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-07 00:42:34.649528 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-07 00:42:34.649541 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-07 00:42:34.649549 | orchestrator | 
2026-03-07 00:42:34.649557 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:34.649565 | orchestrator | Saturday 07 March 2026 00:42:27 +0000 (0:00:00.398) 0:00:01.126 ******** 2026-03-07 00:42:34.649589 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:34.649597 | orchestrator | 2026-03-07 00:42:34.649605 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:34.649613 | orchestrator | Saturday 07 March 2026 00:42:28 +0000 (0:00:00.169) 0:00:01.296 ******** 2026-03-07 00:42:34.649621 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:34.649629 | orchestrator | 2026-03-07 00:42:34.649637 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:34.649645 | orchestrator | Saturday 07 March 2026 00:42:28 +0000 (0:00:00.172) 0:00:01.468 ******** 2026-03-07 00:42:34.649653 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:34.649661 | orchestrator | 2026-03-07 00:42:34.649668 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:34.649676 | orchestrator | Saturday 07 March 2026 00:42:28 +0000 (0:00:00.185) 0:00:01.653 ******** 2026-03-07 00:42:34.649687 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:34.649696 | orchestrator | 2026-03-07 00:42:34.649703 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:34.649711 | orchestrator | Saturday 07 March 2026 00:42:28 +0000 (0:00:00.177) 0:00:01.831 ******** 2026-03-07 00:42:34.649719 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:34.649727 | orchestrator | 2026-03-07 00:42:34.649735 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:34.649743 | orchestrator | Saturday 07 March 2026 00:42:28 +0000 
(0:00:00.180) 0:00:02.012 ******** 2026-03-07 00:42:34.649750 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:34.649758 | orchestrator | 2026-03-07 00:42:34.649766 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:34.649774 | orchestrator | Saturday 07 March 2026 00:42:29 +0000 (0:00:00.193) 0:00:02.206 ******** 2026-03-07 00:42:34.649782 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:34.649790 | orchestrator | 2026-03-07 00:42:34.649799 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:34.649808 | orchestrator | Saturday 07 March 2026 00:42:29 +0000 (0:00:00.194) 0:00:02.400 ******** 2026-03-07 00:42:34.649817 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:34.649827 | orchestrator | 2026-03-07 00:42:34.649836 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:34.649871 | orchestrator | Saturday 07 March 2026 00:42:29 +0000 (0:00:00.196) 0:00:02.597 ******** 2026-03-07 00:42:34.649885 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e) 2026-03-07 00:42:34.649896 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e) 2026-03-07 00:42:34.649905 | orchestrator | 2026-03-07 00:42:34.649914 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:34.649934 | orchestrator | Saturday 07 March 2026 00:42:29 +0000 (0:00:00.406) 0:00:03.003 ******** 2026-03-07 00:42:34.649944 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d799a894-5671-421e-939f-d4a49d05b62b) 2026-03-07 00:42:34.649953 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d799a894-5671-421e-939f-d4a49d05b62b) 2026-03-07 00:42:34.649962 | orchestrator | 2026-03-07 
00:42:34.649972 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:34.649986 | orchestrator | Saturday 07 March 2026 00:42:30 +0000 (0:00:00.658) 0:00:03.661 ******** 2026-03-07 00:42:34.650005 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_beed34ce-a5a1-4e0c-b446-348e6964ce68) 2026-03-07 00:42:34.650073 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_beed34ce-a5a1-4e0c-b446-348e6964ce68) 2026-03-07 00:42:34.650086 | orchestrator | 2026-03-07 00:42:34.650097 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:34.650109 | orchestrator | Saturday 07 March 2026 00:42:31 +0000 (0:00:00.623) 0:00:04.285 ******** 2026-03-07 00:42:34.650133 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_99318194-5870-4346-ba42-ca8c5b557f89) 2026-03-07 00:42:34.650147 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_99318194-5870-4346-ba42-ca8c5b557f89) 2026-03-07 00:42:34.650160 | orchestrator | 2026-03-07 00:42:34.650173 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:42:34.650186 | orchestrator | Saturday 07 March 2026 00:42:31 +0000 (0:00:00.817) 0:00:05.102 ******** 2026-03-07 00:42:34.650199 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-07 00:42:34.650211 | orchestrator | 2026-03-07 00:42:34.650224 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:34.650238 | orchestrator | Saturday 07 March 2026 00:42:32 +0000 (0:00:00.370) 0:00:05.472 ******** 2026-03-07 00:42:34.650259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-07 00:42:34.650273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 
2026-03-07 00:42:34.650281 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-07 00:42:34.650289 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-07 00:42:34.650297 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-07 00:42:34.650305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-07 00:42:34.650312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-07 00:42:34.650320 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-07 00:42:34.650328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-07 00:42:34.650336 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-07 00:42:34.650343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-07 00:42:34.650351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-07 00:42:34.650359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-07 00:42:34.650366 | orchestrator | 2026-03-07 00:42:34.650374 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:34.650382 | orchestrator | Saturday 07 March 2026 00:42:32 +0000 (0:00:00.606) 0:00:06.079 ******** 2026-03-07 00:42:34.650390 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:34.650398 | orchestrator | 2026-03-07 00:42:34.650405 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:34.650413 | orchestrator 
| Saturday 07 March 2026 00:42:33 +0000 (0:00:00.283) 0:00:06.363 ******** 2026-03-07 00:42:34.650421 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:34.650429 | orchestrator | 2026-03-07 00:42:34.650436 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:34.650444 | orchestrator | Saturday 07 March 2026 00:42:33 +0000 (0:00:00.267) 0:00:06.630 ******** 2026-03-07 00:42:34.650452 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:34.650459 | orchestrator | 2026-03-07 00:42:34.650467 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:34.650475 | orchestrator | Saturday 07 March 2026 00:42:33 +0000 (0:00:00.244) 0:00:06.875 ******** 2026-03-07 00:42:34.650483 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:34.650490 | orchestrator | 2026-03-07 00:42:34.650498 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:34.650506 | orchestrator | Saturday 07 March 2026 00:42:33 +0000 (0:00:00.277) 0:00:07.153 ******** 2026-03-07 00:42:34.650514 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:34.650528 | orchestrator | 2026-03-07 00:42:34.650536 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:34.650544 | orchestrator | Saturday 07 March 2026 00:42:34 +0000 (0:00:00.250) 0:00:07.404 ******** 2026-03-07 00:42:34.650551 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:34.650559 | orchestrator | 2026-03-07 00:42:34.650567 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:34.650574 | orchestrator | Saturday 07 March 2026 00:42:34 +0000 (0:00:00.227) 0:00:07.632 ******** 2026-03-07 00:42:34.650582 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:34.650590 | orchestrator | 2026-03-07 
00:42:34.650605 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:42.252601 | orchestrator | Saturday 07 March 2026 00:42:34 +0000 (0:00:00.212) 0:00:07.845 ******** 2026-03-07 00:42:42.252702 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:42.252715 | orchestrator | 2026-03-07 00:42:42.252725 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:42.252733 | orchestrator | Saturday 07 March 2026 00:42:34 +0000 (0:00:00.222) 0:00:08.067 ******** 2026-03-07 00:42:42.252741 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-07 00:42:42.252750 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-07 00:42:42.252758 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-07 00:42:42.252765 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-07 00:42:42.252772 | orchestrator | 2026-03-07 00:42:42.252780 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:42.252787 | orchestrator | Saturday 07 March 2026 00:42:36 +0000 (0:00:01.196) 0:00:09.264 ******** 2026-03-07 00:42:42.252794 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:42.252802 | orchestrator | 2026-03-07 00:42:42.252809 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:42.252816 | orchestrator | Saturday 07 March 2026 00:42:36 +0000 (0:00:00.222) 0:00:09.486 ******** 2026-03-07 00:42:42.252823 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:42:42.252831 | orchestrator | 2026-03-07 00:42:42.252838 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:42:42.252845 | orchestrator | Saturday 07 March 2026 00:42:36 +0000 (0:00:00.214) 0:00:09.701 ******** 2026-03-07 00:42:42.252852 | orchestrator | skipping: [testbed-node-3] 2026-03-07 
00:42:42.252859 | orchestrator |
2026-03-07 00:42:42.252867 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:42:42.252874 | orchestrator | Saturday 07 March 2026 00:42:36 +0000 (0:00:00.203) 0:00:09.904 ********
2026-03-07 00:42:42.252881 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:42:42.252888 | orchestrator |
2026-03-07 00:42:42.252895 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-07 00:42:42.252962 | orchestrator | Saturday 07 March 2026 00:42:36 +0000 (0:00:00.208) 0:00:10.113 ********
2026-03-07 00:42:42.252972 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-03-07 00:42:42.252980 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-03-07 00:42:42.252987 | orchestrator |
2026-03-07 00:42:42.252994 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-07 00:42:42.253002 | orchestrator | Saturday 07 March 2026 00:42:37 +0000 (0:00:00.227) 0:00:10.341 ********
2026-03-07 00:42:42.253009 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:42:42.253016 | orchestrator |
2026-03-07 00:42:42.253023 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-07 00:42:42.253049 | orchestrator | Saturday 07 March 2026 00:42:37 +0000 (0:00:00.142) 0:00:10.483 ********
2026-03-07 00:42:42.253057 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:42:42.253064 | orchestrator |
2026-03-07 00:42:42.253071 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-07 00:42:42.253079 | orchestrator | Saturday 07 March 2026 00:42:37 +0000 (0:00:00.139) 0:00:10.622 ********
2026-03-07 00:42:42.253106 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:42:42.253114 | orchestrator |
2026-03-07 00:42:42.253122 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-07 00:42:42.253129 | orchestrator | Saturday 07 March 2026 00:42:37 +0000 (0:00:00.144) 0:00:10.767 ********
2026-03-07 00:42:42.253136 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:42:42.253145 | orchestrator |
2026-03-07 00:42:42.253153 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-07 00:42:42.253162 | orchestrator | Saturday 07 March 2026 00:42:37 +0000 (0:00:00.154) 0:00:10.922 ********
2026-03-07 00:42:42.253170 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3529c73b-8337-5a09-bb85-f9958b3a6115'}})
2026-03-07 00:42:42.253179 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5644fa9a-696a-5a4b-ae2f-cbc58e712aba'}})
2026-03-07 00:42:42.253188 | orchestrator |
2026-03-07 00:42:42.253196 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-07 00:42:42.253205 | orchestrator | Saturday 07 March 2026 00:42:37 +0000 (0:00:00.170) 0:00:11.092 ********
2026-03-07 00:42:42.253214 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3529c73b-8337-5a09-bb85-f9958b3a6115'}})
2026-03-07 00:42:42.253229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5644fa9a-696a-5a4b-ae2f-cbc58e712aba'}})
2026-03-07 00:42:42.253237 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:42:42.253246 | orchestrator |
2026-03-07 00:42:42.253254 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-07 00:42:42.253266 | orchestrator | Saturday 07 March 2026 00:42:38 +0000 (0:00:00.172) 0:00:11.265 ********
2026-03-07 00:42:42.253279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3529c73b-8337-5a09-bb85-f9958b3a6115'}})
2026-03-07 00:42:42.253291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5644fa9a-696a-5a4b-ae2f-cbc58e712aba'}})
2026-03-07 00:42:42.253303 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:42:42.253314 | orchestrator |
2026-03-07 00:42:42.253326 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-07 00:42:42.253338 | orchestrator | Saturday 07 March 2026 00:42:38 +0000 (0:00:00.389) 0:00:11.654 ********
2026-03-07 00:42:42.253350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3529c73b-8337-5a09-bb85-f9958b3a6115'}})
2026-03-07 00:42:42.253379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5644fa9a-696a-5a4b-ae2f-cbc58e712aba'}})
2026-03-07 00:42:42.253389 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:42:42.253397 | orchestrator |
2026-03-07 00:42:42.253405 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-07 00:42:42.253414 | orchestrator | Saturday 07 March 2026 00:42:38 +0000 (0:00:00.158) 0:00:11.813 ********
2026-03-07 00:42:42.253422 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:42:42.253430 | orchestrator |
2026-03-07 00:42:42.253439 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-07 00:42:42.253447 | orchestrator | Saturday 07 March 2026 00:42:38 +0000 (0:00:00.156) 0:00:11.969 ********
2026-03-07 00:42:42.253455 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:42:42.253463 | orchestrator |
2026-03-07 00:42:42.253476 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-07 00:42:42.253485 | orchestrator | Saturday 07 March 2026 00:42:38 +0000 (0:00:00.158) 0:00:12.128 ********
2026-03-07 00:42:42.253493 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:42:42.253502 | orchestrator |
2026-03-07 00:42:42.253509 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-07 00:42:42.253516 | orchestrator | Saturday 07 March 2026 00:42:39 +0000 (0:00:00.135) 0:00:12.264 ********
2026-03-07 00:42:42.253530 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:42:42.253537 | orchestrator |
2026-03-07 00:42:42.253544 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-07 00:42:42.253552 | orchestrator | Saturday 07 March 2026 00:42:39 +0000 (0:00:00.133) 0:00:12.397 ********
2026-03-07 00:42:42.253559 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:42:42.253566 | orchestrator |
2026-03-07 00:42:42.253573 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-07 00:42:42.253581 | orchestrator | Saturday 07 March 2026 00:42:39 +0000 (0:00:00.143) 0:00:12.536 ********
2026-03-07 00:42:42.253588 | orchestrator | ok: [testbed-node-3] => {
2026-03-07 00:42:42.253595 | orchestrator |     "ceph_osd_devices": {
2026-03-07 00:42:42.253602 | orchestrator |         "sdb": {
2026-03-07 00:42:42.253610 | orchestrator |             "osd_lvm_uuid": "3529c73b-8337-5a09-bb85-f9958b3a6115"
2026-03-07 00:42:42.253617 | orchestrator |         },
2026-03-07 00:42:42.253625 | orchestrator |         "sdc": {
2026-03-07 00:42:42.253632 | orchestrator |             "osd_lvm_uuid": "5644fa9a-696a-5a4b-ae2f-cbc58e712aba"
2026-03-07 00:42:42.253639 | orchestrator |         }
2026-03-07 00:42:42.253646 | orchestrator |     }
2026-03-07 00:42:42.253654 | orchestrator | }
2026-03-07 00:42:42.253661 | orchestrator |
2026-03-07 00:42:42.253668 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-07 00:42:42.253676 | orchestrator | Saturday 07 March 2026 00:42:39 +0000 (0:00:00.143) 0:00:12.679 ********
2026-03-07 00:42:42.253683 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:42:42.253690 | orchestrator |
2026-03-07 00:42:42.253697 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-07 00:42:42.253704 | orchestrator | Saturday 07 March 2026 00:42:39 +0000 (0:00:00.143) 0:00:12.822 ********
2026-03-07 00:42:42.253712 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:42:42.253719 | orchestrator |
2026-03-07 00:42:42.253726 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-07 00:42:42.253733 | orchestrator | Saturday 07 March 2026 00:42:39 +0000 (0:00:00.140) 0:00:12.963 ********
2026-03-07 00:42:42.253740 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:42:42.253747 | orchestrator |
2026-03-07 00:42:42.253755 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-07 00:42:42.253762 | orchestrator | Saturday 07 March 2026 00:42:39 +0000 (0:00:00.130) 0:00:13.094 ********
2026-03-07 00:42:42.253769 | orchestrator | changed: [testbed-node-3] => {
2026-03-07 00:42:42.253776 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-07 00:42:42.253784 | orchestrator |         "ceph_osd_devices": {
2026-03-07 00:42:42.253791 | orchestrator |             "sdb": {
2026-03-07 00:42:42.253798 | orchestrator |                 "osd_lvm_uuid": "3529c73b-8337-5a09-bb85-f9958b3a6115"
2026-03-07 00:42:42.253805 | orchestrator |             },
2026-03-07 00:42:42.253813 | orchestrator |             "sdc": {
2026-03-07 00:42:42.253820 | orchestrator |                 "osd_lvm_uuid": "5644fa9a-696a-5a4b-ae2f-cbc58e712aba"
2026-03-07 00:42:42.253827 | orchestrator |             }
2026-03-07 00:42:42.253834 | orchestrator |         },
2026-03-07 00:42:42.253841 | orchestrator |         "lvm_volumes": [
2026-03-07 00:42:42.253849 | orchestrator |             {
2026-03-07 00:42:42.253856 | orchestrator |                 "data": "osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115",
2026-03-07 00:42:42.253863 | orchestrator |                 "data_vg": "ceph-3529c73b-8337-5a09-bb85-f9958b3a6115"
2026-03-07 00:42:42.253870 | orchestrator |             },
2026-03-07 00:42:42.253878 | orchestrator |             {
2026-03-07 00:42:42.253885 | orchestrator |                 "data": "osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba",
2026-03-07 00:42:42.253892 | orchestrator |                 "data_vg": "ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba"
2026-03-07 00:42:42.253899 | orchestrator |             }
2026-03-07 00:42:42.253927 | orchestrator |         ]
2026-03-07 00:42:42.253934 | orchestrator |     }
2026-03-07 00:42:42.253941 | orchestrator | }
2026-03-07 00:42:42.253954 | orchestrator |
2026-03-07 00:42:42.253961 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-07 00:42:42.253968 | orchestrator | Saturday 07 March 2026 00:42:40 +0000 (0:00:00.448) 0:00:13.542 ********
2026-03-07 00:42:42.253976 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-07 00:42:42.253983 | orchestrator |
2026-03-07 00:42:42.253994 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-07 00:42:42.254002 | orchestrator |
2026-03-07 00:42:42.254009 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-07 00:42:42.254056 | orchestrator | Saturday 07 March 2026 00:42:41 +0000 (0:00:01.496) 0:00:15.038 ********
2026-03-07 00:42:42.254064 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-07 00:42:42.254071 | orchestrator |
2026-03-07 00:42:42.254079 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-07 00:42:42.254086 | orchestrator | Saturday 07 March 2026 00:42:42 +0000 (0:00:00.212) 0:00:15.251 ********
2026-03-07 00:42:42.254094 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:42:42.254101 | orchestrator |
2026-03-07 00:42:42.254113 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:42:49.662812 | orchestrator | Saturday 07 March 2026 00:42:42 +0000 (0:00:00.201) 0:00:15.452 ********
2026-03-07 00:42:49.662908 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-03-07 00:42:49.662919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-03-07 00:42:49.662927 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-03-07 00:42:49.662934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-03-07 00:42:49.662942 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-03-07 00:42:49.662949 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-03-07 00:42:49.663000 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-03-07 00:42:49.663009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-03-07 00:42:49.663016 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-03-07 00:42:49.663024 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-03-07 00:42:49.663031 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-03-07 00:42:49.663038 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-03-07 00:42:49.663049 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-03-07 00:42:49.663057 | orchestrator |
2026-03-07 00:42:49.663069 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:42:49.663081 | orchestrator | Saturday 07 March 2026 00:42:42 +0000 (0:00:00.340) 0:00:15.793 ********
2026-03-07 00:42:49.663094 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:49.663107 | orchestrator |
2026-03-07 00:42:49.663119 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:42:49.663133 | orchestrator | Saturday 07 March 2026 00:42:42 +0000 (0:00:00.180) 0:00:15.974 ********
2026-03-07 00:42:49.663149 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:49.663160 | orchestrator |
2026-03-07 00:42:49.663171 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:42:49.663182 | orchestrator | Saturday 07 March 2026 00:42:42 +0000 (0:00:00.191) 0:00:16.166 ********
2026-03-07 00:42:49.663194 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:49.663205 | orchestrator |
2026-03-07 00:42:49.663217 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:42:49.663229 | orchestrator | Saturday 07 March 2026 00:42:43 +0000 (0:00:00.165) 0:00:16.331 ********
2026-03-07 00:42:49.663270 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:49.663282 | orchestrator |
2026-03-07 00:42:49.663310 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:42:49.663332 | orchestrator | Saturday 07 March 2026 00:42:43 +0000 (0:00:00.153) 0:00:16.484 ********
2026-03-07 00:42:49.663345 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:49.663358 | orchestrator |
2026-03-07 00:42:49.663369 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:42:49.663382 | orchestrator | Saturday 07 March 2026 00:42:43 +0000 (0:00:00.454) 0:00:16.938 ********
2026-03-07 00:42:49.663394 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:49.663406 | orchestrator |
2026-03-07 00:42:49.663419 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:42:49.663432 | orchestrator | Saturday 07 March 2026 00:42:43 +0000 (0:00:00.169) 0:00:17.108 ********
2026-03-07 00:42:49.663445 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:49.663457 | orchestrator |
2026-03-07 00:42:49.663465 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:42:49.663474 | orchestrator | Saturday 07 March 2026 00:42:44 +0000 (0:00:00.178) 0:00:17.287 ********
2026-03-07 00:42:49.663483 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:49.663492 | orchestrator |
2026-03-07 00:42:49.663517 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:42:49.663526 | orchestrator | Saturday 07 March 2026 00:42:44 +0000 (0:00:00.174) 0:00:17.462 ********
2026-03-07 00:42:49.663538 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250)
2026-03-07 00:42:49.663552 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250)
2026-03-07 00:42:49.663564 | orchestrator |
2026-03-07 00:42:49.663576 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:42:49.663587 | orchestrator | Saturday 07 March 2026 00:42:44 +0000 (0:00:00.398) 0:00:17.861 ********
2026-03-07 00:42:49.663600 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_af4ba259-cb6f-4fcf-8c2a-944dae969065)
2026-03-07 00:42:49.663613 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_af4ba259-cb6f-4fcf-8c2a-944dae969065)
2026-03-07 00:42:49.663625 | orchestrator |
2026-03-07 00:42:49.663638 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:42:49.663651 | orchestrator | Saturday 07 March 2026 00:42:45 +0000 (0:00:00.423) 0:00:18.285 ********
2026-03-07 00:42:49.663664 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3c3014e6-40a5-4340-97e1-b63d744f1dc5)
2026-03-07 00:42:49.663678 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3c3014e6-40a5-4340-97e1-b63d744f1dc5)
2026-03-07 00:42:49.663690 | orchestrator |
2026-03-07 00:42:49.663707 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:42:49.663746 | orchestrator | Saturday 07 March 2026 00:42:45 +0000 (0:00:00.436) 0:00:18.721 ********
2026-03-07 00:42:49.663763 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4fdacbb0-1c31-482c-97c6-063a331da0fc)
2026-03-07 00:42:49.663776 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4fdacbb0-1c31-482c-97c6-063a331da0fc)
2026-03-07 00:42:49.663789 | orchestrator |
2026-03-07 00:42:49.663801 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:42:49.663814 | orchestrator | Saturday 07 March 2026 00:42:45 +0000 (0:00:00.439) 0:00:19.161 ********
2026-03-07 00:42:49.663827 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-07 00:42:49.663839 | orchestrator |
2026-03-07 00:42:49.663851 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:42:49.663863 | orchestrator | Saturday 07 March 2026 00:42:46 +0000 (0:00:00.369) 0:00:19.530 ********
2026-03-07 00:42:49.663875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-03-07 00:42:49.663898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-03-07 00:42:49.663916 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-03-07 00:42:49.663931 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-03-07 00:42:49.663943 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-03-07 00:42:49.663983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-03-07 00:42:49.664002 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-03-07 00:42:49.664017 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-03-07 00:42:49.664029 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-03-07 00:42:49.664041 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-03-07 00:42:49.664053 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-03-07 00:42:49.664065 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-03-07 00:42:49.664077 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-03-07 00:42:49.664089 | orchestrator |
2026-03-07 00:42:49.664101 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:42:49.664113 | orchestrator | Saturday 07 March 2026 00:42:46 +0000 (0:00:00.468) 0:00:19.999 ********
2026-03-07 00:42:49.664126 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:49.664137 | orchestrator |
2026-03-07 00:42:49.664149 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:42:49.664161 | orchestrator | Saturday 07 March 2026 00:42:47 +0000 (0:00:00.567) 0:00:20.567 ********
2026-03-07 00:42:49.664173 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:49.664185 | orchestrator |
2026-03-07 00:42:49.664197 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:42:49.664209 | orchestrator | Saturday 07 March 2026 00:42:47 +0000 (0:00:00.194) 0:00:20.761 ********
2026-03-07 00:42:49.664221 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:49.664232 | orchestrator |
2026-03-07 00:42:49.664245 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:42:49.664257 | orchestrator | Saturday 07 March 2026 00:42:47 +0000 (0:00:00.224) 0:00:20.986 ********
2026-03-07 00:42:49.664279 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:49.664291 | orchestrator |
2026-03-07 00:42:49.664302 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:42:49.664314 | orchestrator | Saturday 07 March 2026 00:42:48 +0000 (0:00:00.223) 0:00:21.209 ********
2026-03-07 00:42:49.664326 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:49.664338 | orchestrator |
2026-03-07 00:42:49.664350 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:42:49.664362 | orchestrator | Saturday 07 March 2026 00:42:48 +0000 (0:00:00.199) 0:00:21.408 ********
2026-03-07 00:42:49.664380 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:49.664394 | orchestrator |
2026-03-07 00:42:49.664406 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:42:49.664417 | orchestrator | Saturday 07 March 2026 00:42:48 +0000 (0:00:00.191) 0:00:21.600 ********
2026-03-07 00:42:49.664429 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:49.664441 | orchestrator |
2026-03-07 00:42:49.664452 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:42:49.664467 | orchestrator | Saturday 07 March 2026 00:42:48 +0000 (0:00:00.193) 0:00:21.794 ********
2026-03-07 00:42:49.664484 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:49.664507 | orchestrator |
2026-03-07 00:42:49.664519 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:42:49.664531 | orchestrator | Saturday 07 March 2026 00:42:48 +0000 (0:00:00.243) 0:00:22.037 ********
2026-03-07 00:42:49.664543 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-07 00:42:49.664561 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-07 00:42:49.664575 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-07 00:42:49.664585 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-07 00:42:49.664596 | orchestrator |
2026-03-07 00:42:49.664607 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:42:49.664619 | orchestrator | Saturday 07 March 2026 00:42:49 +0000 (0:00:00.633) 0:00:22.670 ********
2026-03-07 00:42:49.664630 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:56.995476 | orchestrator |
2026-03-07 00:42:56.995654 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:42:56.995676 | orchestrator | Saturday 07 March 2026 00:42:49 +0000 (0:00:00.194) 0:00:22.865 ********
2026-03-07 00:42:56.995689 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:56.995706 | orchestrator |
2026-03-07 00:42:56.995753 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:42:56.995791 | orchestrator | Saturday 07 March 2026 00:42:49 +0000 (0:00:00.190) 0:00:23.055 ********
2026-03-07 00:42:56.995808 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:56.995827 | orchestrator |
2026-03-07 00:42:56.995846 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:42:56.995865 | orchestrator | Saturday 07 March 2026 00:42:50 +0000 (0:00:00.246) 0:00:23.301 ********
2026-03-07 00:42:56.995884 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:56.995902 | orchestrator |
2026-03-07 00:42:56.995916 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-07 00:42:56.995927 | orchestrator | Saturday 07 March 2026 00:42:50 +0000 (0:00:00.588) 0:00:23.890 ********
2026-03-07 00:42:56.995938 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-03-07 00:42:56.995949 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-03-07 00:42:56.995960 | orchestrator |
2026-03-07 00:42:56.995971 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-07 00:42:56.995984 | orchestrator | Saturday 07 March 2026 00:42:50 +0000 (0:00:00.165) 0:00:24.055 ********
2026-03-07 00:42:56.995997 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:56.996038 | orchestrator |
2026-03-07 00:42:56.996053 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-07 00:42:56.996067 | orchestrator | Saturday 07 March 2026 00:42:51 +0000 (0:00:00.148) 0:00:24.204 ********
2026-03-07 00:42:56.996080 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:56.996092 | orchestrator |
2026-03-07 00:42:56.996105 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-07 00:42:56.996119 | orchestrator | Saturday 07 March 2026 00:42:51 +0000 (0:00:00.129) 0:00:24.334 ********
2026-03-07 00:42:56.996131 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:56.996144 | orchestrator |
2026-03-07 00:42:56.996156 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-07 00:42:56.996173 | orchestrator | Saturday 07 March 2026 00:42:51 +0000 (0:00:00.148) 0:00:24.482 ********
2026-03-07 00:42:56.996192 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:42:56.996212 | orchestrator |
2026-03-07 00:42:56.996239 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-07 00:42:56.996261 | orchestrator | Saturday 07 March 2026 00:42:51 +0000 (0:00:00.122) 0:00:24.605 ********
2026-03-07 00:42:56.996280 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '030f8481-3d62-5800-8c17-c22bf68268ab'}})
2026-03-07 00:42:56.996299 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8595c920-fb8d-5336-8a83-206e7467f719'}})
2026-03-07 00:42:56.996348 | orchestrator |
2026-03-07 00:42:56.996366 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-07 00:42:56.996383 | orchestrator | Saturday 07 March 2026 00:42:51 +0000 (0:00:00.160) 0:00:24.766 ********
2026-03-07 00:42:56.996400 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '030f8481-3d62-5800-8c17-c22bf68268ab'}})
2026-03-07 00:42:56.996420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8595c920-fb8d-5336-8a83-206e7467f719'}})
2026-03-07 00:42:56.996439 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:56.996456 | orchestrator |
2026-03-07 00:42:56.996475 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-07 00:42:56.996492 | orchestrator | Saturday 07 March 2026 00:42:51 +0000 (0:00:00.134) 0:00:24.900 ********
2026-03-07 00:42:56.996509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '030f8481-3d62-5800-8c17-c22bf68268ab'}})
2026-03-07 00:42:56.996546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8595c920-fb8d-5336-8a83-206e7467f719'}})
2026-03-07 00:42:56.996566 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:56.996586 | orchestrator |
2026-03-07 00:42:56.996603 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-07 00:42:56.996621 | orchestrator | Saturday 07 March 2026 00:42:51 +0000 (0:00:00.140) 0:00:25.040 ********
2026-03-07 00:42:56.996633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '030f8481-3d62-5800-8c17-c22bf68268ab'}})
2026-03-07 00:42:56.996645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8595c920-fb8d-5336-8a83-206e7467f719'}})
2026-03-07 00:42:56.996656 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:56.996667 | orchestrator |
2026-03-07 00:42:56.996679 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-07 00:42:56.996690 | orchestrator | Saturday 07 March 2026 00:42:51 +0000 (0:00:00.139) 0:00:25.179 ********
2026-03-07 00:42:56.996701 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:42:56.996712 | orchestrator |
2026-03-07 00:42:56.996723 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-07 00:42:56.996734 | orchestrator | Saturday 07 March 2026 00:42:52 +0000 (0:00:00.132) 0:00:25.311 ********
2026-03-07 00:42:56.996745 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:42:56.996756 | orchestrator |
2026-03-07 00:42:56.996766 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-07 00:42:56.996777 | orchestrator | Saturday 07 March 2026 00:42:52 +0000 (0:00:00.149) 0:00:25.461 ********
2026-03-07 00:42:56.996810 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:56.996822 | orchestrator |
2026-03-07 00:42:56.996833 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-07 00:42:56.996844 | orchestrator | Saturday 07 March 2026 00:42:52 +0000 (0:00:00.373) 0:00:25.835 ********
2026-03-07 00:42:56.996855 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:56.996867 | orchestrator |
2026-03-07 00:42:56.996886 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-07 00:42:56.996911 | orchestrator | Saturday 07 March 2026 00:42:52 +0000 (0:00:00.163) 0:00:25.998 ********
2026-03-07 00:42:56.996933 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:56.996951 | orchestrator |
2026-03-07 00:42:56.996968 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-07 00:42:56.996985 | orchestrator | Saturday 07 March 2026 00:42:52 +0000 (0:00:00.175) 0:00:26.174 ********
2026-03-07 00:42:56.997003 | orchestrator | ok: [testbed-node-4] => {
2026-03-07 00:42:56.997049 | orchestrator |     "ceph_osd_devices": {
2026-03-07 00:42:56.997066 | orchestrator |         "sdb": {
2026-03-07 00:42:56.997083 | orchestrator |             "osd_lvm_uuid": "030f8481-3d62-5800-8c17-c22bf68268ab"
2026-03-07 00:42:56.997102 | orchestrator |         },
2026-03-07 00:42:56.997135 | orchestrator |         "sdc": {
2026-03-07 00:42:56.997153 | orchestrator |             "osd_lvm_uuid": "8595c920-fb8d-5336-8a83-206e7467f719"
2026-03-07 00:42:56.997170 | orchestrator |         }
2026-03-07 00:42:56.997189 | orchestrator |     }
2026-03-07 00:42:56.997209 | orchestrator | }
2026-03-07 00:42:56.997226 | orchestrator |
2026-03-07 00:42:56.997246 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-07 00:42:56.997264 | orchestrator | Saturday 07 March 2026 00:42:53 +0000 (0:00:00.152) 0:00:26.327 ********
2026-03-07 00:42:56.997282 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:56.997301 | orchestrator |
2026-03-07 00:42:56.997319 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-07 00:42:56.997337 | orchestrator | Saturday 07 March 2026 00:42:53 +0000 (0:00:00.148) 0:00:26.476 ********
2026-03-07 00:42:56.997349 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:56.997360 | orchestrator |
2026-03-07 00:42:56.997370 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-07 00:42:56.997381 | orchestrator | Saturday 07 March 2026 00:42:53 +0000 (0:00:00.146) 0:00:26.623 ********
2026-03-07 00:42:56.997392 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:42:56.997404 | orchestrator |
2026-03-07 00:42:56.997415 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-07 00:42:56.997426 | orchestrator | Saturday 07 March 2026 00:42:53 +0000 (0:00:00.158) 0:00:26.781 ********
2026-03-07 00:42:56.997437 | orchestrator | changed: [testbed-node-4] => {
2026-03-07 00:42:56.997448 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-07 00:42:56.997460 | orchestrator |         "ceph_osd_devices": {
2026-03-07 00:42:56.997491 | orchestrator |             "sdb": {
2026-03-07 00:42:56.997503 | orchestrator |                 "osd_lvm_uuid": "030f8481-3d62-5800-8c17-c22bf68268ab"
2026-03-07 00:42:56.997514 | orchestrator |             },
2026-03-07 00:42:56.997525 | orchestrator |             "sdc": {
2026-03-07 00:42:56.997536 | orchestrator |                 "osd_lvm_uuid": "8595c920-fb8d-5336-8a83-206e7467f719"
2026-03-07 00:42:56.997547 | orchestrator |             }
2026-03-07 00:42:56.997558 | orchestrator |         },
2026-03-07 00:42:56.997569 | orchestrator |         "lvm_volumes": [
2026-03-07 00:42:56.997580 | orchestrator |             {
2026-03-07 00:42:56.997592 | orchestrator |                 "data": "osd-block-030f8481-3d62-5800-8c17-c22bf68268ab",
2026-03-07 00:42:56.997603 | orchestrator |                 "data_vg": "ceph-030f8481-3d62-5800-8c17-c22bf68268ab"
2026-03-07 00:42:56.997614 | orchestrator |             },
2026-03-07 00:42:56.997625 | orchestrator |             {
2026-03-07 00:42:56.997636 | orchestrator |                 "data": "osd-block-8595c920-fb8d-5336-8a83-206e7467f719",
2026-03-07 00:42:56.997647 | orchestrator |                 "data_vg": "ceph-8595c920-fb8d-5336-8a83-206e7467f719"
2026-03-07 00:42:56.997658 | orchestrator |             }
2026-03-07 00:42:56.997669 | orchestrator |         ]
2026-03-07 00:42:56.997680 | orchestrator |     }
2026-03-07 00:42:56.997691 | orchestrator | }
2026-03-07 00:42:56.997702 | orchestrator |
2026-03-07 00:42:56.997713 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-07 00:42:56.997724 | orchestrator | Saturday 07 March 2026 00:42:53 +0000 (0:00:00.388) 0:00:27.170 ********
2026-03-07 00:42:56.997735 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-07 00:42:56.997746 | orchestrator |
2026-03-07 00:42:56.997757 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-07 00:42:56.997768 | orchestrator |
2026-03-07 00:42:56.997779 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-07 00:42:56.997789 | orchestrator | Saturday 07 March 2026 00:42:55 +0000 (0:00:01.264) 0:00:28.435 ********
2026-03-07 00:42:56.997800 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-07 00:42:56.997811 | orchestrator |
2026-03-07 00:42:56.997822 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-07 00:42:56.997844 | orchestrator | Saturday 07 March 2026 00:42:56 +0000 (0:00:00.926) 0:00:29.361 ********
2026-03-07 00:42:56.997855 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:42:56.997874 | orchestrator |
2026-03-07 00:42:56.997891 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:42:56.997919 | orchestrator | Saturday 07 March 2026 00:42:56 +0000 (0:00:00.338) 0:00:29.700 ********
2026-03-07 00:42:56.997937 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-07 00:42:56.997953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-07 00:42:56.997983 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-07 00:42:56.998002 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-07 00:42:56.998144 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-07 00:42:56.998173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-07 00:43:04.428048 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-07 00:43:04.428164 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-07 00:43:04.428175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-07 00:43:04.428183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-07 00:43:04.428191 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-07 00:43:04.428198 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-07 00:43:04.428204 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-07 00:43:04.428211 | orchestrator |
2026-03-07 00:43:04.428218 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:43:04.428226 | orchestrator | Saturday 07 March 2026 00:42:56 +0000 (0:00:00.481) 0:00:30.181 ********
2026-03-07 00:43:04.428232 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:43:04.428240 | orchestrator |
2026-03-07 00:43:04.428246 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:43:04.428253 | orchestrator | Saturday 07 March 2026
00:42:57 +0000 (0:00:00.217) 0:00:30.399 ******** 2026-03-07 00:43:04.428260 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:04.428266 | orchestrator | 2026-03-07 00:43:04.428273 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:43:04.428280 | orchestrator | Saturday 07 March 2026 00:42:57 +0000 (0:00:00.189) 0:00:30.588 ******** 2026-03-07 00:43:04.428286 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:04.428292 | orchestrator | 2026-03-07 00:43:04.428299 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:43:04.428305 | orchestrator | Saturday 07 March 2026 00:42:57 +0000 (0:00:00.248) 0:00:30.836 ******** 2026-03-07 00:43:04.428311 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:04.428318 | orchestrator | 2026-03-07 00:43:04.428324 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:43:04.428330 | orchestrator | Saturday 07 March 2026 00:42:57 +0000 (0:00:00.205) 0:00:31.042 ******** 2026-03-07 00:43:04.428337 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:04.428343 | orchestrator | 2026-03-07 00:43:04.428350 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:43:04.428356 | orchestrator | Saturday 07 March 2026 00:42:58 +0000 (0:00:00.207) 0:00:31.249 ******** 2026-03-07 00:43:04.428363 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:04.428369 | orchestrator | 2026-03-07 00:43:04.428376 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:43:04.428382 | orchestrator | Saturday 07 March 2026 00:42:58 +0000 (0:00:00.183) 0:00:31.433 ******** 2026-03-07 00:43:04.428409 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:04.428415 | orchestrator | 2026-03-07 00:43:04.428422 | orchestrator | TASK 
[Add known links to the list of available block devices] ****************** 2026-03-07 00:43:04.428428 | orchestrator | Saturday 07 March 2026 00:42:58 +0000 (0:00:00.175) 0:00:31.608 ******** 2026-03-07 00:43:04.428434 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:04.428441 | orchestrator | 2026-03-07 00:43:04.428448 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:43:04.428454 | orchestrator | Saturday 07 March 2026 00:42:58 +0000 (0:00:00.186) 0:00:31.794 ******** 2026-03-07 00:43:04.428461 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d) 2026-03-07 00:43:04.428468 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d) 2026-03-07 00:43:04.428474 | orchestrator | 2026-03-07 00:43:04.428481 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:43:04.428487 | orchestrator | Saturday 07 March 2026 00:42:59 +0000 (0:00:00.754) 0:00:32.549 ******** 2026-03-07 00:43:04.428493 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fc232e22-cf7b-4f47-aee0-37a45820ed30) 2026-03-07 00:43:04.428500 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fc232e22-cf7b-4f47-aee0-37a45820ed30) 2026-03-07 00:43:04.428507 | orchestrator | 2026-03-07 00:43:04.428513 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:43:04.428519 | orchestrator | Saturday 07 March 2026 00:42:59 +0000 (0:00:00.381) 0:00:32.930 ******** 2026-03-07 00:43:04.428526 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_81cf8acf-ab0c-4c96-8ca2-b696b28e7835) 2026-03-07 00:43:04.428532 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_81cf8acf-ab0c-4c96-8ca2-b696b28e7835) 2026-03-07 00:43:04.428539 | orchestrator | 2026-03-07 
00:43:04.428545 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:43:04.428551 | orchestrator | Saturday 07 March 2026 00:43:00 +0000 (0:00:00.408) 0:00:33.339 ******** 2026-03-07 00:43:04.428557 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fa6ba2e8-2fa1-4496-a2d0-ef7dd6ca5952) 2026-03-07 00:43:04.428564 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fa6ba2e8-2fa1-4496-a2d0-ef7dd6ca5952) 2026-03-07 00:43:04.428570 | orchestrator | 2026-03-07 00:43:04.428576 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:43:04.428582 | orchestrator | Saturday 07 March 2026 00:43:00 +0000 (0:00:00.439) 0:00:33.778 ******** 2026-03-07 00:43:04.428588 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-07 00:43:04.428595 | orchestrator | 2026-03-07 00:43:04.428601 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:43:04.428621 | orchestrator | Saturday 07 March 2026 00:43:00 +0000 (0:00:00.302) 0:00:34.080 ******** 2026-03-07 00:43:04.428628 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-07 00:43:04.428635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-07 00:43:04.428641 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-07 00:43:04.428647 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-07 00:43:04.428654 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-07 00:43:04.428660 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-07 00:43:04.428666 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-07 00:43:04.428673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-07 00:43:04.428686 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-07 00:43:04.428693 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-07 00:43:04.428700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-07 00:43:04.428722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-07 00:43:04.428729 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-07 00:43:04.428735 | orchestrator | 2026-03-07 00:43:04.428742 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:43:04.428748 | orchestrator | Saturday 07 March 2026 00:43:01 +0000 (0:00:00.358) 0:00:34.438 ******** 2026-03-07 00:43:04.428755 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:04.428761 | orchestrator | 2026-03-07 00:43:04.428767 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:43:04.428774 | orchestrator | Saturday 07 March 2026 00:43:01 +0000 (0:00:00.170) 0:00:34.609 ******** 2026-03-07 00:43:04.428780 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:04.428787 | orchestrator | 2026-03-07 00:43:04.428793 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:43:04.428800 | orchestrator | Saturday 07 March 2026 00:43:01 +0000 (0:00:00.189) 0:00:34.799 ******** 2026-03-07 00:43:04.428810 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:04.428817 | orchestrator | 
2026-03-07 00:43:04.428824 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:43:04.428830 | orchestrator | Saturday 07 March 2026 00:43:01 +0000 (0:00:00.202) 0:00:35.001 ******** 2026-03-07 00:43:04.428837 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:04.428843 | orchestrator | 2026-03-07 00:43:04.428850 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:43:04.428856 | orchestrator | Saturday 07 March 2026 00:43:01 +0000 (0:00:00.182) 0:00:35.183 ******** 2026-03-07 00:43:04.428863 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:04.428869 | orchestrator | 2026-03-07 00:43:04.428876 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:43:04.428882 | orchestrator | Saturday 07 March 2026 00:43:02 +0000 (0:00:00.160) 0:00:35.344 ******** 2026-03-07 00:43:04.428889 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:04.428895 | orchestrator | 2026-03-07 00:43:04.428902 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:43:04.428908 | orchestrator | Saturday 07 March 2026 00:43:02 +0000 (0:00:00.527) 0:00:35.872 ******** 2026-03-07 00:43:04.428914 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:04.428921 | orchestrator | 2026-03-07 00:43:04.428927 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:43:04.428934 | orchestrator | Saturday 07 March 2026 00:43:02 +0000 (0:00:00.220) 0:00:36.092 ******** 2026-03-07 00:43:04.428940 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:04.428946 | orchestrator | 2026-03-07 00:43:04.428953 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:43:04.428959 | orchestrator | Saturday 07 March 2026 00:43:03 +0000 
(0:00:00.194) 0:00:36.286 ******** 2026-03-07 00:43:04.428966 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-07 00:43:04.428973 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-07 00:43:04.428980 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-07 00:43:04.428986 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-07 00:43:04.428993 | orchestrator | 2026-03-07 00:43:04.429000 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:43:04.429006 | orchestrator | Saturday 07 March 2026 00:43:03 +0000 (0:00:00.589) 0:00:36.876 ******** 2026-03-07 00:43:04.429012 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:04.429018 | orchestrator | 2026-03-07 00:43:04.429029 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:43:04.429036 | orchestrator | Saturday 07 March 2026 00:43:03 +0000 (0:00:00.187) 0:00:37.064 ******** 2026-03-07 00:43:04.429042 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:04.429048 | orchestrator | 2026-03-07 00:43:04.429054 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:43:04.429074 | orchestrator | Saturday 07 March 2026 00:43:04 +0000 (0:00:00.165) 0:00:37.229 ******** 2026-03-07 00:43:04.429080 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:04.429086 | orchestrator | 2026-03-07 00:43:04.429092 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:43:04.429097 | orchestrator | Saturday 07 March 2026 00:43:04 +0000 (0:00:00.191) 0:00:37.421 ******** 2026-03-07 00:43:04.429104 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:04.429111 | orchestrator | 2026-03-07 00:43:04.429122 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-07 00:43:08.251296 | orchestrator | 
Saturday 07 March 2026 00:43:04 +0000 (0:00:00.206) 0:00:37.627 ******** 2026-03-07 00:43:08.251405 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-03-07 00:43:08.251415 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-03-07 00:43:08.251423 | orchestrator | 2026-03-07 00:43:08.251431 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-07 00:43:08.251439 | orchestrator | Saturday 07 March 2026 00:43:04 +0000 (0:00:00.161) 0:00:37.789 ******** 2026-03-07 00:43:08.251446 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:08.251454 | orchestrator | 2026-03-07 00:43:08.251461 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-07 00:43:08.251469 | orchestrator | Saturday 07 March 2026 00:43:04 +0000 (0:00:00.126) 0:00:37.916 ******** 2026-03-07 00:43:08.251514 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:08.251523 | orchestrator | 2026-03-07 00:43:08.251530 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-07 00:43:08.251537 | orchestrator | Saturday 07 March 2026 00:43:04 +0000 (0:00:00.129) 0:00:38.046 ******** 2026-03-07 00:43:08.251544 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:08.251552 | orchestrator | 2026-03-07 00:43:08.251559 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-07 00:43:08.251566 | orchestrator | Saturday 07 March 2026 00:43:05 +0000 (0:00:00.316) 0:00:38.362 ******** 2026-03-07 00:43:08.251574 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:43:08.251582 | orchestrator | 2026-03-07 00:43:08.251589 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-07 00:43:08.251596 | orchestrator | Saturday 07 March 2026 00:43:05 +0000 (0:00:00.135) 0:00:38.497 ******** 
2026-03-07 00:43:08.251604 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6dc70d00-a24c-54e3-88f7-ca23e2f9592d'}}) 2026-03-07 00:43:08.251612 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3960461f-aa79-5447-98f8-9395cd95d2e3'}}) 2026-03-07 00:43:08.251619 | orchestrator | 2026-03-07 00:43:08.251626 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-07 00:43:08.251633 | orchestrator | Saturday 07 March 2026 00:43:05 +0000 (0:00:00.152) 0:00:38.650 ******** 2026-03-07 00:43:08.251641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6dc70d00-a24c-54e3-88f7-ca23e2f9592d'}})  2026-03-07 00:43:08.251650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3960461f-aa79-5447-98f8-9395cd95d2e3'}})  2026-03-07 00:43:08.251657 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:08.251664 | orchestrator | 2026-03-07 00:43:08.251671 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-07 00:43:08.251679 | orchestrator | Saturday 07 March 2026 00:43:05 +0000 (0:00:00.135) 0:00:38.785 ******** 2026-03-07 00:43:08.251686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6dc70d00-a24c-54e3-88f7-ca23e2f9592d'}})  2026-03-07 00:43:08.251716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3960461f-aa79-5447-98f8-9395cd95d2e3'}})  2026-03-07 00:43:08.251723 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:08.251730 | orchestrator | 2026-03-07 00:43:08.251738 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-07 00:43:08.251745 | orchestrator | Saturday 07 March 2026 00:43:05 +0000 (0:00:00.150) 0:00:38.936 ******** 2026-03-07 00:43:08.251752 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6dc70d00-a24c-54e3-88f7-ca23e2f9592d'}})  2026-03-07 00:43:08.251759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3960461f-aa79-5447-98f8-9395cd95d2e3'}})  2026-03-07 00:43:08.251766 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:08.251773 | orchestrator | 2026-03-07 00:43:08.251780 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-07 00:43:08.251787 | orchestrator | Saturday 07 March 2026 00:43:05 +0000 (0:00:00.120) 0:00:39.056 ******** 2026-03-07 00:43:08.251795 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:43:08.251802 | orchestrator | 2026-03-07 00:43:08.251810 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-07 00:43:08.251817 | orchestrator | Saturday 07 March 2026 00:43:05 +0000 (0:00:00.127) 0:00:39.184 ******** 2026-03-07 00:43:08.251824 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:43:08.251831 | orchestrator | 2026-03-07 00:43:08.251853 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-07 00:43:08.251860 | orchestrator | Saturday 07 March 2026 00:43:06 +0000 (0:00:00.135) 0:00:39.320 ******** 2026-03-07 00:43:08.251867 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:08.251875 | orchestrator | 2026-03-07 00:43:08.251882 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-07 00:43:08.251889 | orchestrator | Saturday 07 March 2026 00:43:06 +0000 (0:00:00.119) 0:00:39.440 ******** 2026-03-07 00:43:08.251896 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:08.251903 | orchestrator | 2026-03-07 00:43:08.251911 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-07 00:43:08.251918 | orchestrator | 
Saturday 07 March 2026 00:43:06 +0000 (0:00:00.129) 0:00:39.569 ******** 2026-03-07 00:43:08.251925 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:08.251932 | orchestrator | 2026-03-07 00:43:08.251940 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-07 00:43:08.251947 | orchestrator | Saturday 07 March 2026 00:43:06 +0000 (0:00:00.115) 0:00:39.685 ******** 2026-03-07 00:43:08.251954 | orchestrator | ok: [testbed-node-5] => { 2026-03-07 00:43:08.251962 | orchestrator |  "ceph_osd_devices": { 2026-03-07 00:43:08.251969 | orchestrator |  "sdb": { 2026-03-07 00:43:08.251991 | orchestrator |  "osd_lvm_uuid": "6dc70d00-a24c-54e3-88f7-ca23e2f9592d" 2026-03-07 00:43:08.251998 | orchestrator |  }, 2026-03-07 00:43:08.252006 | orchestrator |  "sdc": { 2026-03-07 00:43:08.252012 | orchestrator |  "osd_lvm_uuid": "3960461f-aa79-5447-98f8-9395cd95d2e3" 2026-03-07 00:43:08.252019 | orchestrator |  } 2026-03-07 00:43:08.252027 | orchestrator |  } 2026-03-07 00:43:08.252034 | orchestrator | } 2026-03-07 00:43:08.252041 | orchestrator | 2026-03-07 00:43:08.252048 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-07 00:43:08.252055 | orchestrator | Saturday 07 March 2026 00:43:06 +0000 (0:00:00.134) 0:00:39.820 ******** 2026-03-07 00:43:08.252062 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:08.252069 | orchestrator | 2026-03-07 00:43:08.252076 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-07 00:43:08.252083 | orchestrator | Saturday 07 March 2026 00:43:06 +0000 (0:00:00.270) 0:00:40.091 ******** 2026-03-07 00:43:08.252107 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:08.252119 | orchestrator | 2026-03-07 00:43:08.252126 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-07 00:43:08.252132 | orchestrator | Saturday 
07 March 2026 00:43:07 +0000 (0:00:00.127) 0:00:40.218 ******** 2026-03-07 00:43:08.252139 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:43:08.252144 | orchestrator | 2026-03-07 00:43:08.252150 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-07 00:43:08.252156 | orchestrator | Saturday 07 March 2026 00:43:07 +0000 (0:00:00.126) 0:00:40.345 ******** 2026-03-07 00:43:08.252162 | orchestrator | changed: [testbed-node-5] => { 2026-03-07 00:43:08.252168 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-07 00:43:08.252175 | orchestrator |  "ceph_osd_devices": { 2026-03-07 00:43:08.252181 | orchestrator |  "sdb": { 2026-03-07 00:43:08.252188 | orchestrator |  "osd_lvm_uuid": "6dc70d00-a24c-54e3-88f7-ca23e2f9592d" 2026-03-07 00:43:08.252194 | orchestrator |  }, 2026-03-07 00:43:08.252201 | orchestrator |  "sdc": { 2026-03-07 00:43:08.252207 | orchestrator |  "osd_lvm_uuid": "3960461f-aa79-5447-98f8-9395cd95d2e3" 2026-03-07 00:43:08.252214 | orchestrator |  } 2026-03-07 00:43:08.252220 | orchestrator |  }, 2026-03-07 00:43:08.252227 | orchestrator |  "lvm_volumes": [ 2026-03-07 00:43:08.252233 | orchestrator |  { 2026-03-07 00:43:08.252240 | orchestrator |  "data": "osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d", 2026-03-07 00:43:08.252246 | orchestrator |  "data_vg": "ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d" 2026-03-07 00:43:08.252253 | orchestrator |  }, 2026-03-07 00:43:08.252259 | orchestrator |  { 2026-03-07 00:43:08.252266 | orchestrator |  "data": "osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3", 2026-03-07 00:43:08.252276 | orchestrator |  "data_vg": "ceph-3960461f-aa79-5447-98f8-9395cd95d2e3" 2026-03-07 00:43:08.252282 | orchestrator |  } 2026-03-07 00:43:08.252288 | orchestrator |  ] 2026-03-07 00:43:08.252298 | orchestrator |  } 2026-03-07 00:43:08.252304 | orchestrator | } 2026-03-07 00:43:08.252310 | orchestrator | 2026-03-07 00:43:08.252316 | orchestrator | RUNNING HANDLER 
[Write configuration file] ************************************* 2026-03-07 00:43:08.252322 | orchestrator | Saturday 07 March 2026 00:43:07 +0000 (0:00:00.188) 0:00:40.533 ******** 2026-03-07 00:43:08.252328 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-07 00:43:08.252334 | orchestrator | 2026-03-07 00:43:08.252340 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:43:08.252347 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-07 00:43:08.252355 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-07 00:43:08.252361 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-07 00:43:08.252367 | orchestrator | 2026-03-07 00:43:08.252374 | orchestrator | 2026-03-07 00:43:08.252381 | orchestrator | 2026-03-07 00:43:08.252387 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:43:08.252394 | orchestrator | Saturday 07 March 2026 00:43:08 +0000 (0:00:00.898) 0:00:41.432 ******** 2026-03-07 00:43:08.252400 | orchestrator | =============================================================================== 2026-03-07 00:43:08.252406 | orchestrator | Write configuration file ------------------------------------------------ 3.66s 2026-03-07 00:43:08.252412 | orchestrator | Add known partitions to the list of available block devices ------------- 1.43s 2026-03-07 00:43:08.252419 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.36s 2026-03-07 00:43:08.252425 | orchestrator | Add known links to the list of available block devices ------------------ 1.22s 2026-03-07 00:43:08.252437 | orchestrator | Add known partitions to the list of available block devices ------------- 1.20s 2026-03-07 
00:43:08.252443 | orchestrator | Print configuration data ------------------------------------------------ 1.02s 2026-03-07 00:43:08.252449 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s 2026-03-07 00:43:08.252454 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2026-03-07 00:43:08.252461 | orchestrator | Get initial list of available block devices ----------------------------- 0.75s 2026-03-07 00:43:08.252467 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.68s 2026-03-07 00:43:08.252473 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2026-03-07 00:43:08.252479 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s 2026-03-07 00:43:08.252486 | orchestrator | Set DB devices config data ---------------------------------------------- 0.63s 2026-03-07 00:43:08.252498 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s 2026-03-07 00:43:08.505687 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.61s 2026-03-07 00:43:08.505814 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s 2026-03-07 00:43:08.505839 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s 2026-03-07 00:43:08.505858 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s 2026-03-07 00:43:08.505877 | orchestrator | Print WAL devices ------------------------------------------------------- 0.56s 2026-03-07 00:43:08.505896 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.56s 2026-03-07 00:43:30.792663 | orchestrator | 2026-03-07 00:43:30 | INFO  | Task 6b52e4ea-4c7a-4f23-90fd-88ac06f41e01 (sync inventory) is running in 
background. Output coming soon. 2026-03-07 00:43:59.923035 | orchestrator | 2026-03-07 00:43:32 | INFO  | Starting group_vars file reorganization 2026-03-07 00:43:59.923218 | orchestrator | 2026-03-07 00:43:32 | INFO  | Moved 0 file(s) to their respective directories 2026-03-07 00:43:59.923237 | orchestrator | 2026-03-07 00:43:32 | INFO  | Group_vars file reorganization completed 2026-03-07 00:43:59.923249 | orchestrator | 2026-03-07 00:43:35 | INFO  | Starting variable preparation from inventory 2026-03-07 00:43:59.923260 | orchestrator | 2026-03-07 00:43:38 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-03-07 00:43:59.923272 | orchestrator | 2026-03-07 00:43:38 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-03-07 00:43:59.923283 | orchestrator | 2026-03-07 00:43:38 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-03-07 00:43:59.923294 | orchestrator | 2026-03-07 00:43:38 | INFO  | 3 file(s) written, 6 host(s) processed 2026-03-07 00:43:59.923306 | orchestrator | 2026-03-07 00:43:38 | INFO  | Variable preparation completed 2026-03-07 00:43:59.923318 | orchestrator | 2026-03-07 00:43:39 | INFO  | Starting inventory overwrite handling 2026-03-07 00:43:59.923329 | orchestrator | 2026-03-07 00:43:39 | INFO  | Handling group overwrites in 99-overwrite 2026-03-07 00:43:59.923341 | orchestrator | 2026-03-07 00:43:39 | INFO  | Removing group frr:children from 60-generic 2026-03-07 00:43:59.923352 | orchestrator | 2026-03-07 00:43:39 | INFO  | Removing group netbird:children from 50-infrastructure 2026-03-07 00:43:59.923384 | orchestrator | 2026-03-07 00:43:39 | INFO  | Removing group ceph-rgw from 50-ceph 2026-03-07 00:43:59.923396 | orchestrator | 2026-03-07 00:43:39 | INFO  | Removing group ceph-mds from 50-ceph 2026-03-07 00:43:59.923407 | orchestrator | 2026-03-07 00:43:39 | INFO  | Handling group overwrites in 20-roles 2026-03-07 00:43:59.923418 | orchestrator | 2026-03-07 00:43:39 | 
INFO  | Removing group k3s_node from 50-infrastructure 2026-03-07 00:43:59.923480 | orchestrator | 2026-03-07 00:43:39 | INFO  | Removed 5 group(s) in total 2026-03-07 00:43:59.923493 | orchestrator | 2026-03-07 00:43:39 | INFO  | Inventory overwrite handling completed 2026-03-07 00:43:59.923504 | orchestrator | 2026-03-07 00:43:40 | INFO  | Starting merge of inventory files 2026-03-07 00:43:59.923515 | orchestrator | 2026-03-07 00:43:40 | INFO  | Inventory files merged successfully 2026-03-07 00:43:59.923526 | orchestrator | 2026-03-07 00:43:46 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-03-07 00:43:59.923536 | orchestrator | 2026-03-07 00:43:58 | INFO  | Successfully wrote ClusterShell configuration 2026-03-07 00:43:59.923550 | orchestrator | [master a89966c] 2026-03-07-00-43 2026-03-07 00:43:59.923565 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-03-07 00:44:02.064107 | orchestrator | 2026-03-07 00:44:02 | INFO  | Task 739f6467-096f-4f9b-a4f4-fa0312d03522 (ceph-create-lvm-devices) was prepared for execution. 2026-03-07 00:44:02.064213 | orchestrator | 2026-03-07 00:44:02 | INFO  | It takes a moment until task 739f6467-096f-4f9b-a4f4-fa0312d03522 (ceph-create-lvm-devices) has been started and output is visible here. 
2026-03-07 00:44:14.359322 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-07 00:44:14.359420 | orchestrator | 2.16.14
2026-03-07 00:44:14.359431 | orchestrator |
2026-03-07 00:44:14.359439 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-07 00:44:14.359446 | orchestrator |
2026-03-07 00:44:14.359453 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-07 00:44:14.359460 | orchestrator | Saturday 07 March 2026 00:44:06 +0000 (0:00:00.286) 0:00:00.286 ********
2026-03-07 00:44:14.359467 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-07 00:44:14.359473 | orchestrator |
2026-03-07 00:44:14.359480 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-07 00:44:14.359486 | orchestrator | Saturday 07 March 2026 00:44:07 +0000 (0:00:00.240) 0:00:00.526 ********
2026-03-07 00:44:14.359492 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:44:14.359499 | orchestrator |
2026-03-07 00:44:14.359505 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:14.359512 | orchestrator | Saturday 07 March 2026 00:44:07 +0000 (0:00:00.198) 0:00:00.725 ********
2026-03-07 00:44:14.359519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-07 00:44:14.359525 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-07 00:44:14.359531 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-07 00:44:14.359618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-07 00:44:14.359627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-07 00:44:14.359633 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-07 00:44:14.359639 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-07 00:44:14.359646 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-07 00:44:14.359652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-07 00:44:14.359658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-07 00:44:14.359665 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-07 00:44:14.359671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-07 00:44:14.359677 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-07 00:44:14.359701 | orchestrator |
2026-03-07 00:44:14.359708 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:14.359714 | orchestrator | Saturday 07 March 2026 00:44:07 +0000 (0:00:00.455) 0:00:01.180 ********
2026-03-07 00:44:14.359720 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:14.359727 | orchestrator |
2026-03-07 00:44:14.359733 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:14.359739 | orchestrator | Saturday 07 March 2026 00:44:07 +0000 (0:00:00.206) 0:00:01.387 ********
2026-03-07 00:44:14.359745 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:14.359751 | orchestrator |
2026-03-07 00:44:14.359757 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:14.359764 | orchestrator | Saturday 07 March 2026 00:44:08 +0000 (0:00:00.195) 0:00:01.582 ********
2026-03-07 00:44:14.359770 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:14.359776 | orchestrator |
2026-03-07 00:44:14.359782 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:14.359789 | orchestrator | Saturday 07 March 2026 00:44:08 +0000 (0:00:00.190) 0:00:01.773 ********
2026-03-07 00:44:14.359795 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:14.359801 | orchestrator |
2026-03-07 00:44:14.359808 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:14.359814 | orchestrator | Saturday 07 March 2026 00:44:08 +0000 (0:00:00.189) 0:00:01.962 ********
2026-03-07 00:44:14.359820 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:14.359826 | orchestrator |
2026-03-07 00:44:14.359832 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:14.359838 | orchestrator | Saturday 07 March 2026 00:44:08 +0000 (0:00:00.210) 0:00:02.172 ********
2026-03-07 00:44:14.359845 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:14.359852 | orchestrator |
2026-03-07 00:44:14.359859 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:14.359867 | orchestrator | Saturday 07 March 2026 00:44:08 +0000 (0:00:00.183) 0:00:02.356 ********
2026-03-07 00:44:14.359874 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:14.359881 | orchestrator |
2026-03-07 00:44:14.359888 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:14.359895 | orchestrator | Saturday 07 March 2026 00:44:09 +0000 (0:00:00.202) 0:00:02.559 ********
2026-03-07 00:44:14.359902 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:14.359909 | orchestrator |
2026-03-07 00:44:14.359916 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:14.359924 | orchestrator | Saturday 07 March 2026 00:44:09 +0000 (0:00:00.202) 0:00:02.761 ********
2026-03-07 00:44:14.359931 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e)
2026-03-07 00:44:14.359939 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e)
2026-03-07 00:44:14.359946 | orchestrator |
2026-03-07 00:44:14.359954 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:14.359974 | orchestrator | Saturday 07 March 2026 00:44:09 +0000 (0:00:00.406) 0:00:03.168 ********
2026-03-07 00:44:14.359982 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d799a894-5671-421e-939f-d4a49d05b62b)
2026-03-07 00:44:14.359989 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d799a894-5671-421e-939f-d4a49d05b62b)
2026-03-07 00:44:14.359997 | orchestrator |
2026-03-07 00:44:14.360004 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:14.360012 | orchestrator | Saturday 07 March 2026 00:44:10 +0000 (0:00:00.536) 0:00:03.705 ********
2026-03-07 00:44:14.360019 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_beed34ce-a5a1-4e0c-b446-348e6964ce68)
2026-03-07 00:44:14.360026 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_beed34ce-a5a1-4e0c-b446-348e6964ce68)
2026-03-07 00:44:14.360038 | orchestrator |
2026-03-07 00:44:14.360048 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:14.360059 | orchestrator | Saturday 07 March 2026 00:44:10 +0000 (0:00:00.675) 0:00:04.380 ********
2026-03-07 00:44:14.360070 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_99318194-5870-4346-ba42-ca8c5b557f89)
2026-03-07 00:44:14.360081 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_99318194-5870-4346-ba42-ca8c5b557f89)
2026-03-07 00:44:14.360091 | orchestrator |
2026-03-07 00:44:14.360102 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-07 00:44:14.360112 | orchestrator | Saturday 07 March 2026 00:44:11 +0000 (0:00:00.922) 0:00:05.302 ********
2026-03-07 00:44:14.360121 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-07 00:44:14.360131 | orchestrator |
2026-03-07 00:44:14.360140 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:14.360150 | orchestrator | Saturday 07 March 2026 00:44:12 +0000 (0:00:00.371) 0:00:05.674 ********
2026-03-07 00:44:14.360158 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-07 00:44:14.360168 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-07 00:44:14.360177 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-07 00:44:14.360186 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-07 00:44:14.360196 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-07 00:44:14.360205 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-07 00:44:14.360215 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-07 00:44:14.360225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-07 00:44:14.360235 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-07 00:44:14.360245 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-07 00:44:14.360255 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-07 00:44:14.360286 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-07 00:44:14.360293 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-07 00:44:14.360299 | orchestrator |
2026-03-07 00:44:14.360308 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:14.360318 | orchestrator | Saturday 07 March 2026 00:44:12 +0000 (0:00:00.544) 0:00:06.219 ********
2026-03-07 00:44:14.360328 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:14.360338 | orchestrator |
2026-03-07 00:44:14.360348 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:14.360359 | orchestrator | Saturday 07 March 2026 00:44:12 +0000 (0:00:00.281) 0:00:06.500 ********
2026-03-07 00:44:14.360370 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:14.360380 | orchestrator |
2026-03-07 00:44:14.360391 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:14.360401 | orchestrator | Saturday 07 March 2026 00:44:13 +0000 (0:00:00.249) 0:00:06.750 ********
2026-03-07 00:44:14.360411 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:14.360421 | orchestrator |
2026-03-07 00:44:14.360432 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:14.360438 | orchestrator | Saturday 07 March 2026 00:44:13 +0000 (0:00:00.241) 0:00:06.992 ********
2026-03-07 00:44:14.360444 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:14.360458 | orchestrator |
2026-03-07 00:44:14.360464 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:14.360470 | orchestrator | Saturday 07 March 2026 00:44:13 +0000 (0:00:00.233) 0:00:07.225 ********
2026-03-07 00:44:14.360476 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:14.360482 | orchestrator |
2026-03-07 00:44:14.360488 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:14.360495 | orchestrator | Saturday 07 March 2026 00:44:13 +0000 (0:00:00.217) 0:00:07.443 ********
2026-03-07 00:44:14.360501 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:14.360507 | orchestrator |
2026-03-07 00:44:14.360513 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:14.360519 | orchestrator | Saturday 07 March 2026 00:44:14 +0000 (0:00:00.207) 0:00:07.650 ********
2026-03-07 00:44:14.360525 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:14.360531 | orchestrator |
2026-03-07 00:44:14.360565 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:22.957753 | orchestrator | Saturday 07 March 2026 00:44:14 +0000 (0:00:00.214) 0:00:07.865 ********
2026-03-07 00:44:22.957899 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:22.957928 | orchestrator |
2026-03-07 00:44:22.957950 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:22.957971 | orchestrator | Saturday 07 March 2026 00:44:14 +0000 (0:00:00.203) 0:00:08.068 ********
2026-03-07 00:44:22.957990 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-07 00:44:22.958007 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-07 00:44:22.958080 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-07 00:44:22.958094 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-07 00:44:22.958105 | orchestrator |
2026-03-07 00:44:22.958117 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:22.958128 | orchestrator | Saturday 07 March 2026 00:44:15 +0000 (0:00:01.179) 0:00:09.248 ********
2026-03-07 00:44:22.958139 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:22.958150 | orchestrator |
2026-03-07 00:44:22.958161 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:22.958172 | orchestrator | Saturday 07 March 2026 00:44:15 +0000 (0:00:00.245) 0:00:09.493 ********
2026-03-07 00:44:22.958183 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:22.958194 | orchestrator |
2026-03-07 00:44:22.958206 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:22.958219 | orchestrator | Saturday 07 March 2026 00:44:16 +0000 (0:00:00.237) 0:00:09.731 ********
2026-03-07 00:44:22.958233 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:22.958246 | orchestrator |
2026-03-07 00:44:22.958258 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-07 00:44:22.958272 | orchestrator | Saturday 07 March 2026 00:44:16 +0000 (0:00:00.219) 0:00:09.950 ********
2026-03-07 00:44:22.958284 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:22.958297 | orchestrator |
2026-03-07 00:44:22.958309 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-07 00:44:22.958322 | orchestrator | Saturday 07 March 2026 00:44:16 +0000 (0:00:00.243) 0:00:10.194 ********
2026-03-07 00:44:22.958335 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:22.958347 | orchestrator |
2026-03-07 00:44:22.958360 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-07 00:44:22.958372 | orchestrator | Saturday 07 March 2026 00:44:16 +0000 (0:00:00.184) 0:00:10.378 ********
2026-03-07 00:44:22.958385 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3529c73b-8337-5a09-bb85-f9958b3a6115'}})
2026-03-07 00:44:22.958399 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5644fa9a-696a-5a4b-ae2f-cbc58e712aba'}})
2026-03-07 00:44:22.958411 | orchestrator |
2026-03-07 00:44:22.958424 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-07 00:44:22.958464 | orchestrator | Saturday 07 March 2026 00:44:17 +0000 (0:00:00.235) 0:00:10.614 ********
2026-03-07 00:44:22.958478 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})
2026-03-07 00:44:22.958492 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})
2026-03-07 00:44:22.958505 | orchestrator |
2026-03-07 00:44:22.958518 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-07 00:44:22.958548 | orchestrator | Saturday 07 March 2026 00:44:19 +0000 (0:00:02.073) 0:00:12.687 ********
2026-03-07 00:44:22.958561 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})
2026-03-07 00:44:22.958576 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})
2026-03-07 00:44:22.958587 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:22.958628 | orchestrator |
2026-03-07 00:44:22.958640 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-07 00:44:22.958650 | orchestrator | Saturday 07 March 2026 00:44:19 +0000 (0:00:00.196) 0:00:12.884 ********
2026-03-07 00:44:22.958661 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})
2026-03-07 00:44:22.958672 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})
2026-03-07 00:44:22.958683 | orchestrator |
2026-03-07 00:44:22.958694 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-07 00:44:22.958706 | orchestrator | Saturday 07 March 2026 00:44:20 +0000 (0:00:01.470) 0:00:14.354 ********
2026-03-07 00:44:22.958717 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})
2026-03-07 00:44:22.958728 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})
2026-03-07 00:44:22.958739 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:22.958750 | orchestrator |
2026-03-07 00:44:22.958761 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-07 00:44:22.958772 | orchestrator | Saturday 07 March 2026 00:44:21 +0000 (0:00:00.166) 0:00:14.521 ********
2026-03-07 00:44:22.958803 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:22.958815 | orchestrator |
2026-03-07 00:44:22.958826 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-07 00:44:22.958837 | orchestrator | Saturday 07 March 2026 00:44:21 +0000 (0:00:00.172) 0:00:14.694 ********
2026-03-07 00:44:22.958848 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})
2026-03-07 00:44:22.958867 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})
2026-03-07 00:44:22.958885 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:22.958903 | orchestrator |
2026-03-07 00:44:22.958921 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-07 00:44:22.958940 | orchestrator | Saturday 07 March 2026 00:44:21 +0000 (0:00:00.386) 0:00:15.081 ********
2026-03-07 00:44:22.958959 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:22.958977 | orchestrator |
2026-03-07 00:44:22.958993 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-07 00:44:22.959004 | orchestrator | Saturday 07 March 2026 00:44:21 +0000 (0:00:00.156) 0:00:15.237 ********
2026-03-07 00:44:22.959024 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})
2026-03-07 00:44:22.959035 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})
2026-03-07 00:44:22.959046 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:22.959057 | orchestrator |
2026-03-07 00:44:22.959067 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-07 00:44:22.959078 | orchestrator | Saturday 07 March 2026 00:44:21 +0000 (0:00:00.150) 0:00:15.388 ********
2026-03-07 00:44:22.959089 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:22.959100 | orchestrator |
2026-03-07 00:44:22.959110 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-07 00:44:22.959121 | orchestrator | Saturday 07 March 2026 00:44:22 +0000 (0:00:00.137) 0:00:15.525 ********
2026-03-07 00:44:22.959132 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})
2026-03-07 00:44:22.959143 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})
2026-03-07 00:44:22.959154 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:22.959165 | orchestrator |
2026-03-07 00:44:22.959175 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-07 00:44:22.959186 | orchestrator | Saturday 07 March 2026 00:44:22 +0000 (0:00:00.162) 0:00:15.687 ********
2026-03-07 00:44:22.959197 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:44:22.959208 | orchestrator |
2026-03-07 00:44:22.959219 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-07 00:44:22.959229 | orchestrator | Saturday 07 March 2026 00:44:22 +0000 (0:00:00.157) 0:00:15.845 ********
2026-03-07 00:44:22.959241 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})
2026-03-07 00:44:22.959252 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})
2026-03-07 00:44:22.959262 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:22.959273 | orchestrator |
2026-03-07 00:44:22.959284 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-07 00:44:22.959295 | orchestrator | Saturday 07 March 2026 00:44:22 +0000 (0:00:00.152) 0:00:15.998 ********
2026-03-07 00:44:22.959305 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})
2026-03-07 00:44:22.959325 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})
2026-03-07 00:44:22.959337 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:22.959348 | orchestrator |
2026-03-07 00:44:22.959358 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-07 00:44:22.959369 | orchestrator | Saturday 07 March 2026 00:44:22 +0000 (0:00:00.154) 0:00:16.153 ********
2026-03-07 00:44:22.959380 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})
2026-03-07 00:44:22.959391 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})
2026-03-07 00:44:22.959402 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:22.959412 | orchestrator |
2026-03-07 00:44:22.959423 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-07 00:44:22.959434 | orchestrator | Saturday 07 March 2026 00:44:22 +0000 (0:00:00.175) 0:00:16.329 ********
2026-03-07 00:44:22.959451 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:22.959462 | orchestrator |
2026-03-07 00:44:22.959472 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-07 00:44:22.959490 | orchestrator | Saturday 07 March 2026 00:44:22 +0000 (0:00:00.138) 0:00:16.467 ********
2026-03-07 00:44:29.817308 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.817368 | orchestrator |
2026-03-07 00:44:29.817378 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-07 00:44:29.817387 | orchestrator | Saturday 07 March 2026 00:44:23 +0000 (0:00:00.147) 0:00:16.614 ********
2026-03-07 00:44:29.817395 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.817401 | orchestrator |
2026-03-07 00:44:29.817407 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-07 00:44:29.817414 | orchestrator | Saturday 07 March 2026 00:44:23 +0000 (0:00:00.139) 0:00:16.753 ********
2026-03-07 00:44:29.817420 | orchestrator | ok: [testbed-node-3] => {
2026-03-07 00:44:29.817424 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-07 00:44:29.817428 | orchestrator | }
2026-03-07 00:44:29.817432 | orchestrator |
2026-03-07 00:44:29.817436 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-07 00:44:29.817440 | orchestrator | Saturday 07 March 2026 00:44:23 +0000 (0:00:00.358) 0:00:17.112 ********
2026-03-07 00:44:29.817444 | orchestrator | ok: [testbed-node-3] => {
2026-03-07 00:44:29.817448 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-07 00:44:29.817452 | orchestrator | }
2026-03-07 00:44:29.817456 | orchestrator |
2026-03-07 00:44:29.817461 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-07 00:44:29.817469 | orchestrator | Saturday 07 March 2026 00:44:23 +0000 (0:00:00.151) 0:00:17.264 ********
2026-03-07 00:44:29.817475 | orchestrator | ok: [testbed-node-3] => {
2026-03-07 00:44:29.817482 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-07 00:44:29.817490 | orchestrator | }
2026-03-07 00:44:29.817494 | orchestrator |
2026-03-07 00:44:29.817498 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-07 00:44:29.817502 | orchestrator | Saturday 07 March 2026 00:44:23 +0000 (0:00:00.152) 0:00:17.416 ********
2026-03-07 00:44:29.817505 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:44:29.817509 | orchestrator |
2026-03-07 00:44:29.817513 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-07 00:44:29.817517 | orchestrator | Saturday 07 March 2026 00:44:24 +0000 (0:00:00.718) 0:00:18.135 ********
2026-03-07 00:44:29.817520 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:44:29.817524 | orchestrator |
2026-03-07 00:44:29.817528 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-07 00:44:29.817532 | orchestrator | Saturday 07 March 2026 00:44:25 +0000 (0:00:00.527) 0:00:18.662 ********
2026-03-07 00:44:29.817535 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:44:29.817541 | orchestrator |
2026-03-07 00:44:29.817546 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-07 00:44:29.817550 | orchestrator | Saturday 07 March 2026 00:44:25 +0000 (0:00:00.518) 0:00:19.181 ********
2026-03-07 00:44:29.817554 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:44:29.817558 | orchestrator |
2026-03-07 00:44:29.817561 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-07 00:44:29.817565 | orchestrator | Saturday 07 March 2026 00:44:25 +0000 (0:00:00.163) 0:00:19.344 ********
2026-03-07 00:44:29.817569 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.817573 | orchestrator |
2026-03-07 00:44:29.817577 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-07 00:44:29.817581 | orchestrator | Saturday 07 March 2026 00:44:25 +0000 (0:00:00.108) 0:00:19.452 ********
2026-03-07 00:44:29.817584 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.817588 | orchestrator |
2026-03-07 00:44:29.817592 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-07 00:44:29.817607 | orchestrator | Saturday 07 March 2026 00:44:26 +0000 (0:00:00.113) 0:00:19.566 ********
2026-03-07 00:44:29.817618 | orchestrator | ok: [testbed-node-3] => {
2026-03-07 00:44:29.817622 | orchestrator |     "vgs_report": {
2026-03-07 00:44:29.817626 | orchestrator |         "vg": []
2026-03-07 00:44:29.817630 | orchestrator |     }
2026-03-07 00:44:29.817664 | orchestrator | }
2026-03-07 00:44:29.817671 | orchestrator |
2026-03-07 00:44:29.817678 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-07 00:44:29.817684 | orchestrator | Saturday 07 March 2026 00:44:26 +0000 (0:00:00.137) 0:00:19.704 ********
2026-03-07 00:44:29.817690 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.817697 | orchestrator |
2026-03-07 00:44:29.817703 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-07 00:44:29.817709 | orchestrator | Saturday 07 March 2026 00:44:26 +0000 (0:00:00.139) 0:00:19.844 ********
2026-03-07 00:44:29.817716 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.817720 | orchestrator |
2026-03-07 00:44:29.817724 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-07 00:44:29.817727 | orchestrator | Saturday 07 March 2026 00:44:26 +0000 (0:00:00.137) 0:00:19.981 ********
2026-03-07 00:44:29.817731 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.817735 | orchestrator |
2026-03-07 00:44:29.817738 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-07 00:44:29.817742 | orchestrator | Saturday 07 March 2026 00:44:26 +0000 (0:00:00.372) 0:00:20.354 ********
2026-03-07 00:44:29.817746 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.817750 | orchestrator |
2026-03-07 00:44:29.817754 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-07 00:44:29.817760 | orchestrator | Saturday 07 March 2026 00:44:27 +0000 (0:00:00.165) 0:00:20.520 ********
2026-03-07 00:44:29.817766 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.817773 | orchestrator |
2026-03-07 00:44:29.817779 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-07 00:44:29.817785 | orchestrator | Saturday 07 March 2026 00:44:27 +0000 (0:00:00.143) 0:00:20.663 ********
2026-03-07 00:44:29.817788 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.817792 | orchestrator |
2026-03-07 00:44:29.817796 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-07 00:44:29.817800 | orchestrator | Saturday 07 March 2026 00:44:27 +0000 (0:00:00.157) 0:00:20.821 ********
2026-03-07 00:44:29.817803 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.817807 | orchestrator |
2026-03-07 00:44:29.817811 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-07 00:44:29.817815 | orchestrator | Saturday 07 March 2026 00:44:27 +0000 (0:00:00.142) 0:00:20.964 ********
2026-03-07 00:44:29.817827 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.817831 | orchestrator |
2026-03-07 00:44:29.817835 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-07 00:44:29.817839 | orchestrator | Saturday 07 March 2026 00:44:27 +0000 (0:00:00.141) 0:00:21.105 ********
2026-03-07 00:44:29.817842 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.817846 | orchestrator |
2026-03-07 00:44:29.817850 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-07 00:44:29.817854 | orchestrator | Saturday 07 March 2026 00:44:27 +0000 (0:00:00.141) 0:00:21.247 ********
2026-03-07 00:44:29.817857 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.817861 | orchestrator |
2026-03-07 00:44:29.817865 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-07 00:44:29.817869 | orchestrator | Saturday 07 March 2026 00:44:27 +0000 (0:00:00.140) 0:00:21.387 ********
2026-03-07 00:44:29.817872 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.817876 | orchestrator |
2026-03-07 00:44:29.817880 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-07 00:44:29.817884 | orchestrator | Saturday 07 March 2026 00:44:28 +0000 (0:00:00.138) 0:00:21.525 ********
2026-03-07 00:44:29.817897 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.817905 | orchestrator |
2026-03-07 00:44:29.817914 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-07 00:44:29.817920 | orchestrator | Saturday 07 March 2026 00:44:28 +0000 (0:00:00.132) 0:00:21.658 ********
2026-03-07 00:44:29.817926 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.817934 | orchestrator |
2026-03-07 00:44:29.817942 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-07 00:44:29.817949 | orchestrator | Saturday 07 March 2026 00:44:28 +0000 (0:00:00.133) 0:00:21.791 ********
2026-03-07 00:44:29.817956 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.817963 | orchestrator |
2026-03-07 00:44:29.817969 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-07 00:44:29.817976 | orchestrator | Saturday 07 March 2026 00:44:28 +0000 (0:00:00.190) 0:00:21.982 ********
2026-03-07 00:44:29.817984 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})
2026-03-07 00:44:29.817992 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})
2026-03-07 00:44:29.817998 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.818005 | orchestrator |
2026-03-07 00:44:29.818012 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-07 00:44:29.818050 | orchestrator | Saturday 07 March 2026 00:44:28 +0000 (0:00:00.444) 0:00:22.427 ********
2026-03-07 00:44:29.818058 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})
2026-03-07 00:44:29.818065 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})
2026-03-07 00:44:29.818073 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.818080 | orchestrator |
2026-03-07 00:44:29.818087 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-07 00:44:29.818094 | orchestrator | Saturday 07 March 2026 00:44:29 +0000 (0:00:00.147) 0:00:22.574 ********
2026-03-07 00:44:29.818101 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})
2026-03-07 00:44:29.818108 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})
2026-03-07 00:44:29.818114 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:44:29.818121 | orchestrator |
2026-03-07 00:44:29.818127 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-07 00:44:29.818134 | orchestrator | Saturday 07 March 2026 00:44:29 +0000 (0:00:00.159) 0:00:22.734 ********
2026-03-07 00:44:29.818141 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})  2026-03-07 00:44:29.818147 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})  2026-03-07 00:44:29.818154 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:44:29.818161 | orchestrator | 2026-03-07 00:44:29.818169 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-07 00:44:29.818176 | orchestrator | Saturday 07 March 2026 00:44:29 +0000 (0:00:00.188) 0:00:22.923 ******** 2026-03-07 00:44:29.818183 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})  2026-03-07 00:44:29.818188 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})  2026-03-07 00:44:29.818196 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:44:29.818201 | orchestrator | 2026-03-07 00:44:29.818205 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-07 00:44:29.818210 | orchestrator | Saturday 07 March 2026 00:44:29 +0000 (0:00:00.229) 0:00:23.153 ******** 2026-03-07 00:44:29.818218 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})  2026-03-07 00:44:35.302940 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})  2026-03-07 00:44:35.303056 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:44:35.303073 | orchestrator | 2026-03-07 00:44:35.303086 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-07 00:44:35.303098 | orchestrator | Saturday 07 March 2026 00:44:29 +0000 (0:00:00.175) 0:00:23.329 ******** 2026-03-07 00:44:35.303109 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})  2026-03-07 00:44:35.303121 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})  2026-03-07 00:44:35.303132 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:44:35.303143 | orchestrator | 2026-03-07 00:44:35.303173 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-07 00:44:35.303185 | orchestrator | Saturday 07 March 2026 00:44:29 +0000 (0:00:00.169) 0:00:23.499 ******** 2026-03-07 00:44:35.303196 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})  2026-03-07 00:44:35.303208 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})  2026-03-07 00:44:35.303219 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:44:35.303230 | orchestrator | 2026-03-07 00:44:35.303241 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-07 00:44:35.303252 | orchestrator | Saturday 07 March 2026 00:44:30 +0000 (0:00:00.200) 0:00:23.699 ******** 2026-03-07 00:44:35.303263 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:44:35.303275 | orchestrator | 2026-03-07 00:44:35.303286 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-07 00:44:35.303297 | orchestrator | Saturday 07 March 2026 00:44:30 +0000 
(0:00:00.511) 0:00:24.211 ******** 2026-03-07 00:44:35.303308 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:44:35.303319 | orchestrator | 2026-03-07 00:44:35.303330 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-07 00:44:35.303341 | orchestrator | Saturday 07 March 2026 00:44:31 +0000 (0:00:00.524) 0:00:24.736 ******** 2026-03-07 00:44:35.303352 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:44:35.303363 | orchestrator | 2026-03-07 00:44:35.303374 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-07 00:44:35.303385 | orchestrator | Saturday 07 March 2026 00:44:31 +0000 (0:00:00.162) 0:00:24.898 ******** 2026-03-07 00:44:35.303396 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'vg_name': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'}) 2026-03-07 00:44:35.303413 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'vg_name': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'}) 2026-03-07 00:44:35.303424 | orchestrator | 2026-03-07 00:44:35.303435 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-07 00:44:35.303446 | orchestrator | Saturday 07 March 2026 00:44:31 +0000 (0:00:00.165) 0:00:25.063 ******** 2026-03-07 00:44:35.303457 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})  2026-03-07 00:44:35.303492 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})  2026-03-07 00:44:35.303506 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:44:35.303519 | orchestrator | 2026-03-07 00:44:35.303531 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-07 00:44:35.303544 | orchestrator | Saturday 07 March 2026 00:44:31 +0000 (0:00:00.414) 0:00:25.478 ******** 2026-03-07 00:44:35.303557 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})  2026-03-07 00:44:35.303569 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})  2026-03-07 00:44:35.303582 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:44:35.303595 | orchestrator | 2026-03-07 00:44:35.303608 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-07 00:44:35.303621 | orchestrator | Saturday 07 March 2026 00:44:32 +0000 (0:00:00.173) 0:00:25.651 ******** 2026-03-07 00:44:35.303634 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'})  2026-03-07 00:44:35.303646 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'})  2026-03-07 00:44:35.303659 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:44:35.303704 | orchestrator | 2026-03-07 00:44:35.303718 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-07 00:44:35.303730 | orchestrator | Saturday 07 March 2026 00:44:32 +0000 (0:00:00.175) 0:00:25.827 ******** 2026-03-07 00:44:35.303761 | orchestrator | ok: [testbed-node-3] => { 2026-03-07 00:44:35.303774 | orchestrator |  "lvm_report": { 2026-03-07 00:44:35.303788 | orchestrator |  "lv": [ 2026-03-07 00:44:35.303800 | orchestrator |  { 2026-03-07 00:44:35.303812 | orchestrator |  "lv_name": 
"osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115", 2026-03-07 00:44:35.303827 | orchestrator |  "vg_name": "ceph-3529c73b-8337-5a09-bb85-f9958b3a6115" 2026-03-07 00:44:35.303840 | orchestrator |  }, 2026-03-07 00:44:35.303852 | orchestrator |  { 2026-03-07 00:44:35.303868 | orchestrator |  "lv_name": "osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba", 2026-03-07 00:44:35.303886 | orchestrator |  "vg_name": "ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba" 2026-03-07 00:44:35.303905 | orchestrator |  } 2026-03-07 00:44:35.303922 | orchestrator |  ], 2026-03-07 00:44:35.303940 | orchestrator |  "pv": [ 2026-03-07 00:44:35.303957 | orchestrator |  { 2026-03-07 00:44:35.303976 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-07 00:44:35.303992 | orchestrator |  "vg_name": "ceph-3529c73b-8337-5a09-bb85-f9958b3a6115" 2026-03-07 00:44:35.304009 | orchestrator |  }, 2026-03-07 00:44:35.304027 | orchestrator |  { 2026-03-07 00:44:35.304044 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-07 00:44:35.304063 | orchestrator |  "vg_name": "ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba" 2026-03-07 00:44:35.304081 | orchestrator |  } 2026-03-07 00:44:35.304100 | orchestrator |  ] 2026-03-07 00:44:35.304118 | orchestrator |  } 2026-03-07 00:44:35.304138 | orchestrator | } 2026-03-07 00:44:35.304156 | orchestrator | 2026-03-07 00:44:35.304175 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-07 00:44:35.304193 | orchestrator | 2026-03-07 00:44:35.304212 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-07 00:44:35.304231 | orchestrator | Saturday 07 March 2026 00:44:32 +0000 (0:00:00.294) 0:00:26.122 ******** 2026-03-07 00:44:35.304256 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-07 00:44:35.304267 | orchestrator | 2026-03-07 00:44:35.304278 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-07 
00:44:35.304289 | orchestrator | Saturday 07 March 2026 00:44:32 +0000 (0:00:00.278) 0:00:26.400 ******** 2026-03-07 00:44:35.304300 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:44:35.304311 | orchestrator | 2026-03-07 00:44:35.304322 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:35.304333 | orchestrator | Saturday 07 March 2026 00:44:33 +0000 (0:00:00.240) 0:00:26.640 ******** 2026-03-07 00:44:35.304343 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-07 00:44:35.304354 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-07 00:44:35.304365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-07 00:44:35.304375 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-07 00:44:35.304386 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-07 00:44:35.304417 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-07 00:44:35.304439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-07 00:44:35.304457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-07 00:44:35.304469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-07 00:44:35.304480 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-07 00:44:35.304491 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-07 00:44:35.304501 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-07 00:44:35.304512 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-07 00:44:35.304523 | orchestrator | 2026-03-07 00:44:35.304534 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:35.304545 | orchestrator | Saturday 07 March 2026 00:44:33 +0000 (0:00:00.403) 0:00:27.044 ******** 2026-03-07 00:44:35.304555 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:35.304566 | orchestrator | 2026-03-07 00:44:35.304577 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:35.304588 | orchestrator | Saturday 07 March 2026 00:44:33 +0000 (0:00:00.215) 0:00:27.260 ******** 2026-03-07 00:44:35.304599 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:35.304610 | orchestrator | 2026-03-07 00:44:35.304620 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:35.304631 | orchestrator | Saturday 07 March 2026 00:44:33 +0000 (0:00:00.204) 0:00:27.465 ******** 2026-03-07 00:44:35.304642 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:35.304653 | orchestrator | 2026-03-07 00:44:35.304664 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:35.304718 | orchestrator | Saturday 07 March 2026 00:44:34 +0000 (0:00:00.702) 0:00:28.167 ******** 2026-03-07 00:44:35.304729 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:35.304740 | orchestrator | 2026-03-07 00:44:35.304751 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:35.304762 | orchestrator | Saturday 07 March 2026 00:44:34 +0000 (0:00:00.222) 0:00:28.390 ******** 2026-03-07 00:44:35.304773 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:35.304784 | orchestrator | 2026-03-07 00:44:35.304794 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-07 00:44:35.304806 | orchestrator | Saturday 07 March 2026 00:44:35 +0000 (0:00:00.216) 0:00:28.607 ******** 2026-03-07 00:44:35.304824 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:35.304835 | orchestrator | 2026-03-07 00:44:35.304858 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:47.119079 | orchestrator | Saturday 07 March 2026 00:44:35 +0000 (0:00:00.205) 0:00:28.813 ******** 2026-03-07 00:44:47.119180 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:47.119194 | orchestrator | 2026-03-07 00:44:47.119202 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:47.119210 | orchestrator | Saturday 07 March 2026 00:44:35 +0000 (0:00:00.207) 0:00:29.020 ******** 2026-03-07 00:44:47.119218 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:47.119225 | orchestrator | 2026-03-07 00:44:47.119233 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:47.119240 | orchestrator | Saturday 07 March 2026 00:44:35 +0000 (0:00:00.206) 0:00:29.226 ******** 2026-03-07 00:44:47.119247 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250) 2026-03-07 00:44:47.119257 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250) 2026-03-07 00:44:47.119264 | orchestrator | 2026-03-07 00:44:47.119271 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:47.119279 | orchestrator | Saturday 07 March 2026 00:44:36 +0000 (0:00:00.442) 0:00:29.668 ******** 2026-03-07 00:44:47.119286 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_af4ba259-cb6f-4fcf-8c2a-944dae969065) 2026-03-07 00:44:47.119294 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_af4ba259-cb6f-4fcf-8c2a-944dae969065) 2026-03-07 00:44:47.119304 | orchestrator | 2026-03-07 00:44:47.119313 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:47.119333 | orchestrator | Saturday 07 March 2026 00:44:36 +0000 (0:00:00.466) 0:00:30.135 ******** 2026-03-07 00:44:47.119346 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3c3014e6-40a5-4340-97e1-b63d744f1dc5) 2026-03-07 00:44:47.119353 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3c3014e6-40a5-4340-97e1-b63d744f1dc5) 2026-03-07 00:44:47.119360 | orchestrator | 2026-03-07 00:44:47.119366 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:47.119373 | orchestrator | Saturday 07 March 2026 00:44:37 +0000 (0:00:00.495) 0:00:30.630 ******** 2026-03-07 00:44:47.119380 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4fdacbb0-1c31-482c-97c6-063a331da0fc) 2026-03-07 00:44:47.119387 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4fdacbb0-1c31-482c-97c6-063a331da0fc) 2026-03-07 00:44:47.119394 | orchestrator | 2026-03-07 00:44:47.119401 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:44:47.119408 | orchestrator | Saturday 07 March 2026 00:44:37 +0000 (0:00:00.709) 0:00:31.340 ******** 2026-03-07 00:44:47.119416 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-07 00:44:47.119424 | orchestrator | 2026-03-07 00:44:47.119432 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:47.119439 | orchestrator | Saturday 07 March 2026 00:44:38 +0000 (0:00:00.618) 0:00:31.959 ******** 2026-03-07 00:44:47.119448 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-03-07 00:44:47.119457 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-07 00:44:47.119466 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-07 00:44:47.119474 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-07 00:44:47.119482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-07 00:44:47.119490 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-07 00:44:47.119523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-07 00:44:47.119530 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-07 00:44:47.119537 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-07 00:44:47.119544 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-07 00:44:47.119550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-07 00:44:47.119557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-07 00:44:47.119564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-07 00:44:47.119571 | orchestrator | 2026-03-07 00:44:47.119578 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:47.119585 | orchestrator | Saturday 07 March 2026 00:44:39 +0000 (0:00:00.959) 0:00:32.919 ******** 2026-03-07 00:44:47.119592 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:47.119598 | orchestrator | 2026-03-07 
00:44:47.119605 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:47.119630 | orchestrator | Saturday 07 March 2026 00:44:39 +0000 (0:00:00.205) 0:00:33.124 ******** 2026-03-07 00:44:47.119638 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:47.119645 | orchestrator | 2026-03-07 00:44:47.119652 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:47.119659 | orchestrator | Saturday 07 March 2026 00:44:39 +0000 (0:00:00.245) 0:00:33.369 ******** 2026-03-07 00:44:47.119666 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:47.119672 | orchestrator | 2026-03-07 00:44:47.119695 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:47.119702 | orchestrator | Saturday 07 March 2026 00:44:40 +0000 (0:00:00.211) 0:00:33.581 ******** 2026-03-07 00:44:47.119709 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:47.119716 | orchestrator | 2026-03-07 00:44:47.119723 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:47.119730 | orchestrator | Saturday 07 March 2026 00:44:40 +0000 (0:00:00.212) 0:00:33.793 ******** 2026-03-07 00:44:47.119759 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:47.119767 | orchestrator | 2026-03-07 00:44:47.119774 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:47.119781 | orchestrator | Saturday 07 March 2026 00:44:40 +0000 (0:00:00.218) 0:00:34.012 ******** 2026-03-07 00:44:47.119787 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:47.119794 | orchestrator | 2026-03-07 00:44:47.119801 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:47.119808 | orchestrator | Saturday 07 March 2026 00:44:40 +0000 (0:00:00.193) 
0:00:34.206 ******** 2026-03-07 00:44:47.119814 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:47.119821 | orchestrator | 2026-03-07 00:44:47.119827 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:47.119834 | orchestrator | Saturday 07 March 2026 00:44:40 +0000 (0:00:00.206) 0:00:34.412 ******** 2026-03-07 00:44:47.119841 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:47.119847 | orchestrator | 2026-03-07 00:44:47.119854 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:47.119860 | orchestrator | Saturday 07 March 2026 00:44:41 +0000 (0:00:00.233) 0:00:34.646 ******** 2026-03-07 00:44:47.119867 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-07 00:44:47.119874 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-07 00:44:47.119882 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-07 00:44:47.119890 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-07 00:44:47.119898 | orchestrator | 2026-03-07 00:44:47.119906 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:47.119922 | orchestrator | Saturday 07 March 2026 00:44:42 +0000 (0:00:00.893) 0:00:35.539 ******** 2026-03-07 00:44:47.119931 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:47.119939 | orchestrator | 2026-03-07 00:44:47.119947 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:47.119955 | orchestrator | Saturday 07 March 2026 00:44:42 +0000 (0:00:00.203) 0:00:35.743 ******** 2026-03-07 00:44:47.119963 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:47.119971 | orchestrator | 2026-03-07 00:44:47.119981 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:47.119990 | orchestrator | Saturday 07 
March 2026 00:44:42 +0000 (0:00:00.681) 0:00:36.425 ******** 2026-03-07 00:44:47.119997 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:47.120005 | orchestrator | 2026-03-07 00:44:47.120013 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:44:47.120021 | orchestrator | Saturday 07 March 2026 00:44:43 +0000 (0:00:00.210) 0:00:36.635 ******** 2026-03-07 00:44:47.120028 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:47.120036 | orchestrator | 2026-03-07 00:44:47.120045 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-07 00:44:47.120057 | orchestrator | Saturday 07 March 2026 00:44:43 +0000 (0:00:00.206) 0:00:36.842 ******** 2026-03-07 00:44:47.120063 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:47.120070 | orchestrator | 2026-03-07 00:44:47.120076 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-07 00:44:47.120083 | orchestrator | Saturday 07 March 2026 00:44:43 +0000 (0:00:00.125) 0:00:36.968 ******** 2026-03-07 00:44:47.120089 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '030f8481-3d62-5800-8c17-c22bf68268ab'}}) 2026-03-07 00:44:47.120096 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8595c920-fb8d-5336-8a83-206e7467f719'}}) 2026-03-07 00:44:47.120103 | orchestrator | 2026-03-07 00:44:47.120109 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-07 00:44:47.120116 | orchestrator | Saturday 07 March 2026 00:44:43 +0000 (0:00:00.196) 0:00:37.165 ******** 2026-03-07 00:44:47.120126 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'}) 2026-03-07 00:44:47.120135 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'}) 2026-03-07 00:44:47.120143 | orchestrator | 2026-03-07 00:44:47.120151 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-07 00:44:47.120158 | orchestrator | Saturday 07 March 2026 00:44:45 +0000 (0:00:01.921) 0:00:39.086 ******** 2026-03-07 00:44:47.120166 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'})  2026-03-07 00:44:47.120176 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'})  2026-03-07 00:44:47.120184 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:47.120191 | orchestrator | 2026-03-07 00:44:47.120199 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-07 00:44:47.120207 | orchestrator | Saturday 07 March 2026 00:44:45 +0000 (0:00:00.159) 0:00:39.246 ******** 2026-03-07 00:44:47.120215 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'}) 2026-03-07 00:44:47.120230 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'}) 2026-03-07 00:44:53.110455 | orchestrator | 2026-03-07 00:44:53.110574 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-07 00:44:53.110618 | orchestrator | Saturday 07 March 2026 00:44:47 +0000 (0:00:01.378) 0:00:40.625 ******** 2026-03-07 00:44:53.110632 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 
'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'})  2026-03-07 00:44:53.110645 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'})  2026-03-07 00:44:53.110658 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:53.110671 | orchestrator | 2026-03-07 00:44:53.110685 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-07 00:44:53.110697 | orchestrator | Saturday 07 March 2026 00:44:47 +0000 (0:00:00.162) 0:00:40.787 ******** 2026-03-07 00:44:53.110709 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:53.110722 | orchestrator | 2026-03-07 00:44:53.110736 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-07 00:44:53.110748 | orchestrator | Saturday 07 March 2026 00:44:47 +0000 (0:00:00.148) 0:00:40.936 ******** 2026-03-07 00:44:53.110761 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'})  2026-03-07 00:44:53.110819 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'})  2026-03-07 00:44:53.110833 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:53.110846 | orchestrator | 2026-03-07 00:44:53.110858 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-07 00:44:53.110869 | orchestrator | Saturday 07 March 2026 00:44:47 +0000 (0:00:00.163) 0:00:41.099 ******** 2026-03-07 00:44:53.110880 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:53.110891 | orchestrator | 2026-03-07 00:44:53.110902 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-07 00:44:53.110913 | orchestrator | 
Saturday 07 March 2026 00:44:47 +0000 (0:00:00.141) 0:00:41.240 ******** 2026-03-07 00:44:53.110925 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'})  2026-03-07 00:44:53.110936 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'})  2026-03-07 00:44:53.110947 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:53.110972 | orchestrator | 2026-03-07 00:44:53.110995 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-07 00:44:53.111024 | orchestrator | Saturday 07 March 2026 00:44:48 +0000 (0:00:00.388) 0:00:41.629 ******** 2026-03-07 00:44:53.111038 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:53.111048 | orchestrator | 2026-03-07 00:44:53.111056 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-07 00:44:53.111065 | orchestrator | Saturday 07 March 2026 00:44:48 +0000 (0:00:00.146) 0:00:41.776 ******** 2026-03-07 00:44:53.111074 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'})  2026-03-07 00:44:53.111082 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'})  2026-03-07 00:44:53.111090 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:53.111098 | orchestrator | 2026-03-07 00:44:53.111109 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-07 00:44:53.111121 | orchestrator | Saturday 07 March 2026 00:44:48 +0000 (0:00:00.181) 0:00:41.957 ******** 2026-03-07 00:44:53.111134 | orchestrator | ok: [testbed-node-4] 
2026-03-07 00:44:53.111148 | orchestrator | 2026-03-07 00:44:53.111161 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-07 00:44:53.111184 | orchestrator | Saturday 07 March 2026 00:44:48 +0000 (0:00:00.146) 0:00:42.104 ******** 2026-03-07 00:44:53.111193 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'})  2026-03-07 00:44:53.111201 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'})  2026-03-07 00:44:53.111210 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:53.111218 | orchestrator | 2026-03-07 00:44:53.111226 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-07 00:44:53.111235 | orchestrator | Saturday 07 March 2026 00:44:48 +0000 (0:00:00.150) 0:00:42.255 ******** 2026-03-07 00:44:53.111244 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'})  2026-03-07 00:44:53.111252 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'})  2026-03-07 00:44:53.111260 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:53.111268 | orchestrator | 2026-03-07 00:44:53.111277 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-07 00:44:53.111304 | orchestrator | Saturday 07 March 2026 00:44:48 +0000 (0:00:00.155) 0:00:42.410 ******** 2026-03-07 00:44:53.111313 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'})  2026-03-07 
00:44:53.111322 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'})  2026-03-07 00:44:53.111330 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:53.111338 | orchestrator | 2026-03-07 00:44:53.111347 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-07 00:44:53.111355 | orchestrator | Saturday 07 March 2026 00:44:49 +0000 (0:00:00.165) 0:00:42.576 ******** 2026-03-07 00:44:53.111362 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:53.111370 | orchestrator | 2026-03-07 00:44:53.111377 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-07 00:44:53.111384 | orchestrator | Saturday 07 March 2026 00:44:49 +0000 (0:00:00.169) 0:00:42.746 ******** 2026-03-07 00:44:53.111392 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:53.111399 | orchestrator | 2026-03-07 00:44:53.111406 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-07 00:44:53.111413 | orchestrator | Saturday 07 March 2026 00:44:49 +0000 (0:00:00.148) 0:00:42.894 ******** 2026-03-07 00:44:53.111421 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:53.111428 | orchestrator | 2026-03-07 00:44:53.111435 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-07 00:44:53.111442 | orchestrator | Saturday 07 March 2026 00:44:49 +0000 (0:00:00.155) 0:00:43.050 ******** 2026-03-07 00:44:53.111449 | orchestrator | ok: [testbed-node-4] => { 2026-03-07 00:44:53.111457 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-07 00:44:53.111464 | orchestrator | } 2026-03-07 00:44:53.111472 | orchestrator | 2026-03-07 00:44:53.111479 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-07 
00:44:53.111486 | orchestrator | Saturday 07 March 2026 00:44:49 +0000 (0:00:00.148) 0:00:43.199 ******** 2026-03-07 00:44:53.111493 | orchestrator | ok: [testbed-node-4] => { 2026-03-07 00:44:53.111500 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-07 00:44:53.111508 | orchestrator | } 2026-03-07 00:44:53.111515 | orchestrator | 2026-03-07 00:44:53.111523 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-07 00:44:53.111530 | orchestrator | Saturday 07 March 2026 00:44:49 +0000 (0:00:00.160) 0:00:43.360 ******** 2026-03-07 00:44:53.111542 | orchestrator | ok: [testbed-node-4] => { 2026-03-07 00:44:53.111550 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-07 00:44:53.111557 | orchestrator | } 2026-03-07 00:44:53.111564 | orchestrator | 2026-03-07 00:44:53.111572 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-07 00:44:53.111579 | orchestrator | Saturday 07 March 2026 00:44:50 +0000 (0:00:00.366) 0:00:43.726 ******** 2026-03-07 00:44:53.111586 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:44:53.111594 | orchestrator | 2026-03-07 00:44:53.111601 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-07 00:44:53.111609 | orchestrator | Saturday 07 March 2026 00:44:50 +0000 (0:00:00.657) 0:00:44.384 ******** 2026-03-07 00:44:53.111616 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:44:53.111623 | orchestrator | 2026-03-07 00:44:53.111630 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-07 00:44:53.111643 | orchestrator | Saturday 07 March 2026 00:44:51 +0000 (0:00:00.546) 0:00:44.931 ******** 2026-03-07 00:44:53.111654 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:44:53.111664 | orchestrator | 2026-03-07 00:44:53.111682 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-03-07 00:44:53.111696 | orchestrator | Saturday 07 March 2026 00:44:51 +0000 (0:00:00.561) 0:00:45.492 ******** 2026-03-07 00:44:53.111706 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:44:53.111716 | orchestrator | 2026-03-07 00:44:53.111726 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-07 00:44:53.111737 | orchestrator | Saturday 07 March 2026 00:44:52 +0000 (0:00:00.150) 0:00:45.643 ******** 2026-03-07 00:44:53.111747 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:53.111759 | orchestrator | 2026-03-07 00:44:53.111770 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-07 00:44:53.111805 | orchestrator | Saturday 07 March 2026 00:44:52 +0000 (0:00:00.099) 0:00:45.742 ******** 2026-03-07 00:44:53.111816 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:53.111828 | orchestrator | 2026-03-07 00:44:53.111839 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-07 00:44:53.111850 | orchestrator | Saturday 07 March 2026 00:44:52 +0000 (0:00:00.120) 0:00:45.863 ******** 2026-03-07 00:44:53.111861 | orchestrator | ok: [testbed-node-4] => { 2026-03-07 00:44:53.111872 | orchestrator |  "vgs_report": { 2026-03-07 00:44:53.111884 | orchestrator |  "vg": [] 2026-03-07 00:44:53.111896 | orchestrator |  } 2026-03-07 00:44:53.111908 | orchestrator | } 2026-03-07 00:44:53.111920 | orchestrator | 2026-03-07 00:44:53.111932 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-07 00:44:53.111944 | orchestrator | Saturday 07 March 2026 00:44:52 +0000 (0:00:00.162) 0:00:46.026 ******** 2026-03-07 00:44:53.111955 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:53.111967 | orchestrator | 2026-03-07 00:44:53.111979 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-07 00:44:53.111991 | orchestrator | Saturday 07 March 2026 00:44:52 +0000 (0:00:00.158) 0:00:46.185 ******** 2026-03-07 00:44:53.112003 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:53.112016 | orchestrator | 2026-03-07 00:44:53.112025 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-07 00:44:53.112033 | orchestrator | Saturday 07 March 2026 00:44:52 +0000 (0:00:00.148) 0:00:46.334 ******** 2026-03-07 00:44:53.112040 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:53.112047 | orchestrator | 2026-03-07 00:44:53.112054 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-07 00:44:53.112072 | orchestrator | Saturday 07 March 2026 00:44:52 +0000 (0:00:00.140) 0:00:46.474 ******** 2026-03-07 00:44:53.112080 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:53.112087 | orchestrator | 2026-03-07 00:44:53.112103 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-07 00:44:58.411198 | orchestrator | Saturday 07 March 2026 00:44:53 +0000 (0:00:00.143) 0:00:46.617 ******** 2026-03-07 00:44:58.411304 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:58.411315 | orchestrator | 2026-03-07 00:44:58.411320 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-07 00:44:58.411325 | orchestrator | Saturday 07 March 2026 00:44:53 +0000 (0:00:00.355) 0:00:46.973 ******** 2026-03-07 00:44:58.411329 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:58.411332 | orchestrator | 2026-03-07 00:44:58.411336 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-07 00:44:58.411341 | orchestrator | Saturday 07 March 2026 00:44:53 +0000 (0:00:00.139) 0:00:47.112 ******** 2026-03-07 00:44:58.411345 | orchestrator | skipping: [testbed-node-4] 
2026-03-07 00:44:58.411348 | orchestrator | 2026-03-07 00:44:58.411352 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-07 00:44:58.411356 | orchestrator | Saturday 07 March 2026 00:44:53 +0000 (0:00:00.142) 0:00:47.255 ******** 2026-03-07 00:44:58.411360 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:58.411364 | orchestrator | 2026-03-07 00:44:58.411368 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-07 00:44:58.411371 | orchestrator | Saturday 07 March 2026 00:44:53 +0000 (0:00:00.148) 0:00:47.403 ******** 2026-03-07 00:44:58.411375 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:58.411379 | orchestrator | 2026-03-07 00:44:58.411383 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-07 00:44:58.411387 | orchestrator | Saturday 07 March 2026 00:44:54 +0000 (0:00:00.132) 0:00:47.536 ******** 2026-03-07 00:44:58.411391 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:58.411402 | orchestrator | 2026-03-07 00:44:58.411407 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-07 00:44:58.411410 | orchestrator | Saturday 07 March 2026 00:44:54 +0000 (0:00:00.142) 0:00:47.678 ******** 2026-03-07 00:44:58.411414 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:58.411418 | orchestrator | 2026-03-07 00:44:58.411422 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-07 00:44:58.411427 | orchestrator | Saturday 07 March 2026 00:44:54 +0000 (0:00:00.149) 0:00:47.828 ******** 2026-03-07 00:44:58.411433 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:58.411439 | orchestrator | 2026-03-07 00:44:58.411445 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-07 00:44:58.411451 | orchestrator | 
Saturday 07 March 2026 00:44:54 +0000 (0:00:00.186) 0:00:48.014 ******** 2026-03-07 00:44:58.411457 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:58.411464 | orchestrator | 2026-03-07 00:44:58.411468 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-07 00:44:58.411472 | orchestrator | Saturday 07 March 2026 00:44:54 +0000 (0:00:00.146) 0:00:48.161 ******** 2026-03-07 00:44:58.411475 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:58.411479 | orchestrator | 2026-03-07 00:44:58.411483 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-07 00:44:58.411519 | orchestrator | Saturday 07 March 2026 00:44:54 +0000 (0:00:00.156) 0:00:48.317 ******** 2026-03-07 00:44:58.411525 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'})  2026-03-07 00:44:58.411531 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'})  2026-03-07 00:44:58.411535 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:58.411538 | orchestrator | 2026-03-07 00:44:58.411542 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-07 00:44:58.411546 | orchestrator | Saturday 07 March 2026 00:44:54 +0000 (0:00:00.143) 0:00:48.461 ******** 2026-03-07 00:44:58.411550 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'})  2026-03-07 00:44:58.411558 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'})  2026-03-07 00:44:58.411562 | orchestrator | skipping: 
[testbed-node-4] 2026-03-07 00:44:58.411565 | orchestrator | 2026-03-07 00:44:58.411569 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-07 00:44:58.411573 | orchestrator | Saturday 07 March 2026 00:44:55 +0000 (0:00:00.152) 0:00:48.613 ******** 2026-03-07 00:44:58.411606 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'})  2026-03-07 00:44:58.411664 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'})  2026-03-07 00:44:58.411670 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:58.411676 | orchestrator | 2026-03-07 00:44:58.411682 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-07 00:44:58.411689 | orchestrator | Saturday 07 March 2026 00:44:55 +0000 (0:00:00.423) 0:00:49.037 ******** 2026-03-07 00:44:58.411696 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'})  2026-03-07 00:44:58.411703 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'})  2026-03-07 00:44:58.411727 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:58.411734 | orchestrator | 2026-03-07 00:44:58.411756 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-07 00:44:58.411761 | orchestrator | Saturday 07 March 2026 00:44:55 +0000 (0:00:00.185) 0:00:49.222 ******** 2026-03-07 00:44:58.411765 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 
'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'})  2026-03-07 00:44:58.411770 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'})  2026-03-07 00:44:58.411774 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:58.411778 | orchestrator | 2026-03-07 00:44:58.411786 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-07 00:44:58.411793 | orchestrator | Saturday 07 March 2026 00:44:55 +0000 (0:00:00.212) 0:00:49.434 ******** 2026-03-07 00:44:58.412057 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'})  2026-03-07 00:44:58.412068 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'})  2026-03-07 00:44:58.412073 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:58.412077 | orchestrator | 2026-03-07 00:44:58.412082 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-07 00:44:58.412086 | orchestrator | Saturday 07 March 2026 00:44:56 +0000 (0:00:00.178) 0:00:49.613 ******** 2026-03-07 00:44:58.412091 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'})  2026-03-07 00:44:58.412095 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'})  2026-03-07 00:44:58.412098 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:58.412102 | orchestrator | 2026-03-07 00:44:58.412106 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-07 
00:44:58.412110 | orchestrator | Saturday 07 March 2026 00:44:56 +0000 (0:00:00.184) 0:00:49.797 ******** 2026-03-07 00:44:58.412113 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'})  2026-03-07 00:44:58.412123 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'})  2026-03-07 00:44:58.412131 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:58.412135 | orchestrator | 2026-03-07 00:44:58.412139 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-07 00:44:58.412143 | orchestrator | Saturday 07 March 2026 00:44:56 +0000 (0:00:00.149) 0:00:49.947 ******** 2026-03-07 00:44:58.412147 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:44:58.412151 | orchestrator | 2026-03-07 00:44:58.412155 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-07 00:44:58.412159 | orchestrator | Saturday 07 March 2026 00:44:56 +0000 (0:00:00.541) 0:00:50.488 ******** 2026-03-07 00:44:58.412162 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:44:58.412166 | orchestrator | 2026-03-07 00:44:58.412199 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-07 00:44:58.412204 | orchestrator | Saturday 07 March 2026 00:44:57 +0000 (0:00:00.653) 0:00:51.142 ******** 2026-03-07 00:44:58.412208 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:44:58.412211 | orchestrator | 2026-03-07 00:44:58.412215 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-07 00:44:58.412219 | orchestrator | Saturday 07 March 2026 00:44:57 +0000 (0:00:00.167) 0:00:51.310 ******** 2026-03-07 00:44:58.412223 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'vg_name': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'}) 2026-03-07 00:44:58.412228 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'vg_name': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'}) 2026-03-07 00:44:58.412231 | orchestrator | 2026-03-07 00:44:58.412235 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-07 00:44:58.412239 | orchestrator | Saturday 07 March 2026 00:44:58 +0000 (0:00:00.216) 0:00:51.526 ******** 2026-03-07 00:44:58.412243 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'})  2026-03-07 00:44:58.412246 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'})  2026-03-07 00:44:58.412250 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:44:58.412254 | orchestrator | 2026-03-07 00:44:58.412258 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-07 00:44:58.412262 | orchestrator | Saturday 07 March 2026 00:44:58 +0000 (0:00:00.209) 0:00:51.736 ******** 2026-03-07 00:44:58.412265 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'})  2026-03-07 00:44:58.412276 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'})  2026-03-07 00:45:04.652347 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:45:04.652495 | orchestrator | 2026-03-07 00:45:04.652526 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-07 00:45:04.652549 | 
orchestrator | Saturday 07 March 2026 00:44:58 +0000 (0:00:00.186) 0:00:51.922 ******** 2026-03-07 00:45:04.652570 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'})  2026-03-07 00:45:04.652591 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'})  2026-03-07 00:45:04.652606 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:45:04.652617 | orchestrator | 2026-03-07 00:45:04.652629 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-07 00:45:04.652666 | orchestrator | Saturday 07 March 2026 00:44:58 +0000 (0:00:00.172) 0:00:52.095 ******** 2026-03-07 00:45:04.652679 | orchestrator | ok: [testbed-node-4] => { 2026-03-07 00:45:04.652690 | orchestrator |  "lvm_report": { 2026-03-07 00:45:04.652702 | orchestrator |  "lv": [ 2026-03-07 00:45:04.652714 | orchestrator |  { 2026-03-07 00:45:04.652725 | orchestrator |  "lv_name": "osd-block-030f8481-3d62-5800-8c17-c22bf68268ab", 2026-03-07 00:45:04.652737 | orchestrator |  "vg_name": "ceph-030f8481-3d62-5800-8c17-c22bf68268ab" 2026-03-07 00:45:04.652748 | orchestrator |  }, 2026-03-07 00:45:04.652759 | orchestrator |  { 2026-03-07 00:45:04.652770 | orchestrator |  "lv_name": "osd-block-8595c920-fb8d-5336-8a83-206e7467f719", 2026-03-07 00:45:04.652781 | orchestrator |  "vg_name": "ceph-8595c920-fb8d-5336-8a83-206e7467f719" 2026-03-07 00:45:04.652791 | orchestrator |  } 2026-03-07 00:45:04.652802 | orchestrator |  ], 2026-03-07 00:45:04.652813 | orchestrator |  "pv": [ 2026-03-07 00:45:04.652824 | orchestrator |  { 2026-03-07 00:45:04.652835 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-07 00:45:04.652895 | orchestrator |  "vg_name": "ceph-030f8481-3d62-5800-8c17-c22bf68268ab" 2026-03-07 00:45:04.652924 | orchestrator |  }, 2026-03-07 
00:45:04.652944 | orchestrator |  { 2026-03-07 00:45:04.652962 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-07 00:45:04.652982 | orchestrator |  "vg_name": "ceph-8595c920-fb8d-5336-8a83-206e7467f719" 2026-03-07 00:45:04.653001 | orchestrator |  } 2026-03-07 00:45:04.653021 | orchestrator |  ] 2026-03-07 00:45:04.653041 | orchestrator |  } 2026-03-07 00:45:04.653062 | orchestrator | } 2026-03-07 00:45:04.653082 | orchestrator | 2026-03-07 00:45:04.653095 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-07 00:45:04.653106 | orchestrator | 2026-03-07 00:45:04.653117 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-07 00:45:04.653128 | orchestrator | Saturday 07 March 2026 00:44:59 +0000 (0:00:00.522) 0:00:52.617 ******** 2026-03-07 00:45:04.653139 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-07 00:45:04.653151 | orchestrator | 2026-03-07 00:45:04.653161 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-07 00:45:04.653173 | orchestrator | Saturday 07 March 2026 00:44:59 +0000 (0:00:00.255) 0:00:52.873 ******** 2026-03-07 00:45:04.653184 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:45:04.653195 | orchestrator | 2026-03-07 00:45:04.653206 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:45:04.653217 | orchestrator | Saturday 07 March 2026 00:44:59 +0000 (0:00:00.246) 0:00:53.119 ******** 2026-03-07 00:45:04.653227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-07 00:45:04.653238 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-07 00:45:04.653249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-07 00:45:04.653259 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-07 00:45:04.653270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-07 00:45:04.653281 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-07 00:45:04.653291 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-07 00:45:04.653302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-07 00:45:04.653312 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-07 00:45:04.653323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-07 00:45:04.653346 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-07 00:45:04.653357 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-07 00:45:04.653368 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-07 00:45:04.653379 | orchestrator | 2026-03-07 00:45:04.653389 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:45:04.653404 | orchestrator | Saturday 07 March 2026 00:45:00 +0000 (0:00:00.449) 0:00:53.569 ******** 2026-03-07 00:45:04.653415 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:04.653430 | orchestrator | 2026-03-07 00:45:04.653454 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:45:04.653479 | orchestrator | Saturday 07 March 2026 00:45:00 +0000 (0:00:00.210) 0:00:53.779 ******** 2026-03-07 00:45:04.653497 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:04.653514 | orchestrator | 2026-03-07 
00:45:04.653531 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:45:04.653569 | orchestrator | Saturday 07 March 2026 00:45:00 +0000 (0:00:00.199) 0:00:53.979 ******** 2026-03-07 00:45:04.653588 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:04.653606 | orchestrator | 2026-03-07 00:45:04.653624 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:45:04.653642 | orchestrator | Saturday 07 March 2026 00:45:00 +0000 (0:00:00.208) 0:00:54.188 ******** 2026-03-07 00:45:04.653661 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:04.653680 | orchestrator | 2026-03-07 00:45:04.653698 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:45:04.653718 | orchestrator | Saturday 07 March 2026 00:45:00 +0000 (0:00:00.221) 0:00:54.409 ******** 2026-03-07 00:45:04.653730 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:04.653740 | orchestrator | 2026-03-07 00:45:04.653751 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:45:04.653762 | orchestrator | Saturday 07 March 2026 00:45:01 +0000 (0:00:00.621) 0:00:55.030 ******** 2026-03-07 00:45:04.653773 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:04.653784 | orchestrator | 2026-03-07 00:45:04.653794 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:45:04.653805 | orchestrator | Saturday 07 March 2026 00:45:01 +0000 (0:00:00.203) 0:00:55.234 ******** 2026-03-07 00:45:04.653816 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:04.653826 | orchestrator | 2026-03-07 00:45:04.653837 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:45:04.653883 | orchestrator | Saturday 07 March 2026 00:45:01 +0000 (0:00:00.218) 
0:00:55.453 ******** 2026-03-07 00:45:04.653894 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:04.653905 | orchestrator | 2026-03-07 00:45:04.653916 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:45:04.653927 | orchestrator | Saturday 07 March 2026 00:45:02 +0000 (0:00:00.214) 0:00:55.667 ******** 2026-03-07 00:45:04.653938 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d) 2026-03-07 00:45:04.653950 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d) 2026-03-07 00:45:04.653961 | orchestrator | 2026-03-07 00:45:04.653972 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:45:04.653983 | orchestrator | Saturday 07 March 2026 00:45:02 +0000 (0:00:00.438) 0:00:56.106 ******** 2026-03-07 00:45:04.654127 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fc232e22-cf7b-4f47-aee0-37a45820ed30) 2026-03-07 00:45:04.654158 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fc232e22-cf7b-4f47-aee0-37a45820ed30) 2026-03-07 00:45:04.654178 | orchestrator | 2026-03-07 00:45:04.654198 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:45:04.654240 | orchestrator | Saturday 07 March 2026 00:45:03 +0000 (0:00:00.461) 0:00:56.568 ******** 2026-03-07 00:45:04.654261 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_81cf8acf-ab0c-4c96-8ca2-b696b28e7835) 2026-03-07 00:45:04.654280 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_81cf8acf-ab0c-4c96-8ca2-b696b28e7835) 2026-03-07 00:45:04.654292 | orchestrator | 2026-03-07 00:45:04.654303 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:45:04.654313 | orchestrator | Saturday 07 
March 2026 00:45:03 +0000 (0:00:00.432) 0:00:57.001 ******** 2026-03-07 00:45:04.654324 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fa6ba2e8-2fa1-4496-a2d0-ef7dd6ca5952) 2026-03-07 00:45:04.654335 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fa6ba2e8-2fa1-4496-a2d0-ef7dd6ca5952) 2026-03-07 00:45:04.654346 | orchestrator | 2026-03-07 00:45:04.654356 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-07 00:45:04.654367 | orchestrator | Saturday 07 March 2026 00:45:03 +0000 (0:00:00.426) 0:00:57.428 ******** 2026-03-07 00:45:04.654378 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-07 00:45:04.654388 | orchestrator | 2026-03-07 00:45:04.654399 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:45:04.654410 | orchestrator | Saturday 07 March 2026 00:45:04 +0000 (0:00:00.329) 0:00:57.757 ******** 2026-03-07 00:45:04.654420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-07 00:45:04.654431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-07 00:45:04.654442 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-07 00:45:04.654453 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-07 00:45:04.654463 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-07 00:45:04.654474 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-07 00:45:04.654484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-07 00:45:04.654495 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-07 00:45:04.654506 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-07 00:45:04.654517 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-07 00:45:04.654528 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-07 00:45:04.654552 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-07 00:45:14.028234 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-07 00:45:14.028352 | orchestrator | 2026-03-07 00:45:14.028369 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:45:14.028383 | orchestrator | Saturday 07 March 2026 00:45:04 +0000 (0:00:00.393) 0:00:58.151 ******** 2026-03-07 00:45:14.028395 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:14.028409 | orchestrator | 2026-03-07 00:45:14.028422 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:45:14.028434 | orchestrator | Saturday 07 March 2026 00:45:04 +0000 (0:00:00.207) 0:00:58.359 ******** 2026-03-07 00:45:14.028446 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:14.028458 | orchestrator | 2026-03-07 00:45:14.028470 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:45:14.028482 | orchestrator | Saturday 07 March 2026 00:45:05 +0000 (0:00:00.661) 0:00:59.021 ******** 2026-03-07 00:45:14.028494 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:14.028531 | orchestrator | 2026-03-07 00:45:14.028543 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:45:14.028554 | 
orchestrator | Saturday 07 March 2026 00:45:05 +0000 (0:00:00.219) 0:00:59.241 ******** 2026-03-07 00:45:14.028565 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:14.028576 | orchestrator | 2026-03-07 00:45:14.028586 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:45:14.028597 | orchestrator | Saturday 07 March 2026 00:45:05 +0000 (0:00:00.214) 0:00:59.455 ******** 2026-03-07 00:45:14.028608 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:14.028619 | orchestrator | 2026-03-07 00:45:14.028629 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:45:14.028641 | orchestrator | Saturday 07 March 2026 00:45:06 +0000 (0:00:00.222) 0:00:59.678 ******** 2026-03-07 00:45:14.028651 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:14.028662 | orchestrator | 2026-03-07 00:45:14.028673 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:45:14.028684 | orchestrator | Saturday 07 March 2026 00:45:06 +0000 (0:00:00.193) 0:00:59.871 ******** 2026-03-07 00:45:14.028694 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:14.028705 | orchestrator | 2026-03-07 00:45:14.028716 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:45:14.028727 | orchestrator | Saturday 07 March 2026 00:45:06 +0000 (0:00:00.204) 0:01:00.075 ******** 2026-03-07 00:45:14.028738 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:14.028751 | orchestrator | 2026-03-07 00:45:14.028765 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:45:14.028778 | orchestrator | Saturday 07 March 2026 00:45:06 +0000 (0:00:00.207) 0:01:00.283 ******** 2026-03-07 00:45:14.028792 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-07 00:45:14.028820 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-03-07 00:45:14.028834 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-07 00:45:14.028847 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-07 00:45:14.028860 | orchestrator | 2026-03-07 00:45:14.028874 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:45:14.028887 | orchestrator | Saturday 07 March 2026 00:45:07 +0000 (0:00:00.661) 0:01:00.944 ******** 2026-03-07 00:45:14.028931 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:14.028950 | orchestrator | 2026-03-07 00:45:14.028970 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:45:14.028988 | orchestrator | Saturday 07 March 2026 00:45:07 +0000 (0:00:00.240) 0:01:01.185 ******** 2026-03-07 00:45:14.029009 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:14.029021 | orchestrator | 2026-03-07 00:45:14.029032 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:45:14.029042 | orchestrator | Saturday 07 March 2026 00:45:07 +0000 (0:00:00.203) 0:01:01.389 ******** 2026-03-07 00:45:14.029053 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:14.029064 | orchestrator | 2026-03-07 00:45:14.029075 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-07 00:45:14.029086 | orchestrator | Saturday 07 March 2026 00:45:08 +0000 (0:00:00.187) 0:01:01.576 ******** 2026-03-07 00:45:14.029096 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:14.029107 | orchestrator | 2026-03-07 00:45:14.029117 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-07 00:45:14.029128 | orchestrator | Saturday 07 March 2026 00:45:08 +0000 (0:00:00.203) 0:01:01.780 ******** 2026-03-07 00:45:14.029139 | orchestrator | skipping: [testbed-node-5] 2026-03-07 
00:45:14.029149 | orchestrator | 2026-03-07 00:45:14.029161 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-07 00:45:14.029171 | orchestrator | Saturday 07 March 2026 00:45:08 +0000 (0:00:00.371) 0:01:02.151 ******** 2026-03-07 00:45:14.029182 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6dc70d00-a24c-54e3-88f7-ca23e2f9592d'}}) 2026-03-07 00:45:14.029203 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3960461f-aa79-5447-98f8-9395cd95d2e3'}}) 2026-03-07 00:45:14.029214 | orchestrator | 2026-03-07 00:45:14.029225 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-07 00:45:14.029235 | orchestrator | Saturday 07 March 2026 00:45:08 +0000 (0:00:00.201) 0:01:02.352 ******** 2026-03-07 00:45:14.029247 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'}) 2026-03-07 00:45:14.029260 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'}) 2026-03-07 00:45:14.029271 | orchestrator | 2026-03-07 00:45:14.029281 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-07 00:45:14.029312 | orchestrator | Saturday 07 March 2026 00:45:10 +0000 (0:00:01.954) 0:01:04.306 ******** 2026-03-07 00:45:14.029324 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'})  2026-03-07 00:45:14.029336 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'})  2026-03-07 00:45:14.029346 | orchestrator | skipping: 
[testbed-node-5] 2026-03-07 00:45:14.029357 | orchestrator | 2026-03-07 00:45:14.029368 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-07 00:45:14.029378 | orchestrator | Saturday 07 March 2026 00:45:10 +0000 (0:00:00.168) 0:01:04.475 ******** 2026-03-07 00:45:14.029390 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'}) 2026-03-07 00:45:14.029401 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'}) 2026-03-07 00:45:14.029411 | orchestrator | 2026-03-07 00:45:14.029422 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-07 00:45:14.029433 | orchestrator | Saturday 07 March 2026 00:45:12 +0000 (0:00:01.510) 0:01:05.986 ******** 2026-03-07 00:45:14.029443 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'})  2026-03-07 00:45:14.029454 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'})  2026-03-07 00:45:14.029465 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:14.029475 | orchestrator | 2026-03-07 00:45:14.029486 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-07 00:45:14.029497 | orchestrator | Saturday 07 March 2026 00:45:12 +0000 (0:00:00.159) 0:01:06.145 ******** 2026-03-07 00:45:14.029508 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:14.029518 | orchestrator | 2026-03-07 00:45:14.029529 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-07 00:45:14.029540 | 
orchestrator | Saturday 07 March 2026 00:45:12 +0000 (0:00:00.128) 0:01:06.274 ******** 2026-03-07 00:45:14.029559 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'})  2026-03-07 00:45:14.029583 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'})  2026-03-07 00:45:14.029602 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:14.029619 | orchestrator | 2026-03-07 00:45:14.029637 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-07 00:45:14.029654 | orchestrator | Saturday 07 March 2026 00:45:12 +0000 (0:00:00.145) 0:01:06.419 ******** 2026-03-07 00:45:14.029682 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:14.029701 | orchestrator | 2026-03-07 00:45:14.029721 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-07 00:45:14.029738 | orchestrator | Saturday 07 March 2026 00:45:13 +0000 (0:00:00.146) 0:01:06.566 ******** 2026-03-07 00:45:14.029755 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'})  2026-03-07 00:45:14.029766 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'})  2026-03-07 00:45:14.029777 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:14.029788 | orchestrator | 2026-03-07 00:45:14.029798 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-07 00:45:14.029809 | orchestrator | Saturday 07 March 2026 00:45:13 +0000 (0:00:00.160) 0:01:06.727 ******** 2026-03-07 00:45:14.029819 | orchestrator | 
skipping: [testbed-node-5] 2026-03-07 00:45:14.029830 | orchestrator | 2026-03-07 00:45:14.029840 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-07 00:45:14.029851 | orchestrator | Saturday 07 March 2026 00:45:13 +0000 (0:00:00.142) 0:01:06.869 ******** 2026-03-07 00:45:14.029861 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'})  2026-03-07 00:45:14.029872 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'})  2026-03-07 00:45:14.029883 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:14.029921 | orchestrator | 2026-03-07 00:45:14.029941 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-07 00:45:14.029959 | orchestrator | Saturday 07 March 2026 00:45:13 +0000 (0:00:00.152) 0:01:07.022 ******** 2026-03-07 00:45:14.029976 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:45:14.029993 | orchestrator | 2026-03-07 00:45:14.030010 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-07 00:45:14.030112 | orchestrator | Saturday 07 March 2026 00:45:13 +0000 (0:00:00.349) 0:01:07.372 ******** 2026-03-07 00:45:14.030147 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'})  2026-03-07 00:45:20.204347 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'})  2026-03-07 00:45:20.205365 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.205423 | orchestrator | 2026-03-07 00:45:20.205444 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-03-07 00:45:20.205458 | orchestrator | Saturday 07 March 2026 00:45:14 +0000 (0:00:00.165) 0:01:07.537 ******** 2026-03-07 00:45:20.205471 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'})  2026-03-07 00:45:20.205483 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'})  2026-03-07 00:45:20.205494 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.205505 | orchestrator | 2026-03-07 00:45:20.205516 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-07 00:45:20.205527 | orchestrator | Saturday 07 March 2026 00:45:14 +0000 (0:00:00.170) 0:01:07.708 ******** 2026-03-07 00:45:20.205538 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'})  2026-03-07 00:45:20.205549 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'})  2026-03-07 00:45:20.205583 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.205594 | orchestrator | 2026-03-07 00:45:20.205605 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-07 00:45:20.205616 | orchestrator | Saturday 07 March 2026 00:45:14 +0000 (0:00:00.163) 0:01:07.871 ******** 2026-03-07 00:45:20.205626 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.205637 | orchestrator | 2026-03-07 00:45:20.205648 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-07 00:45:20.205658 | orchestrator | Saturday 07 March 2026 00:45:14 +0000 
(0:00:00.142) 0:01:08.013 ******** 2026-03-07 00:45:20.205669 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.205679 | orchestrator | 2026-03-07 00:45:20.205690 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-07 00:45:20.205700 | orchestrator | Saturday 07 March 2026 00:45:14 +0000 (0:00:00.134) 0:01:08.148 ******** 2026-03-07 00:45:20.205711 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.205721 | orchestrator | 2026-03-07 00:45:20.205732 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-07 00:45:20.205743 | orchestrator | Saturday 07 March 2026 00:45:14 +0000 (0:00:00.151) 0:01:08.299 ******** 2026-03-07 00:45:20.205754 | orchestrator | ok: [testbed-node-5] => { 2026-03-07 00:45:20.205765 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-07 00:45:20.205776 | orchestrator | } 2026-03-07 00:45:20.205788 | orchestrator | 2026-03-07 00:45:20.205798 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-07 00:45:20.205809 | orchestrator | Saturday 07 March 2026 00:45:14 +0000 (0:00:00.141) 0:01:08.440 ******** 2026-03-07 00:45:20.205820 | orchestrator | ok: [testbed-node-5] => { 2026-03-07 00:45:20.205831 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-07 00:45:20.205841 | orchestrator | } 2026-03-07 00:45:20.205852 | orchestrator | 2026-03-07 00:45:20.205863 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-07 00:45:20.205874 | orchestrator | Saturday 07 March 2026 00:45:15 +0000 (0:00:00.132) 0:01:08.573 ******** 2026-03-07 00:45:20.205885 | orchestrator | ok: [testbed-node-5] => { 2026-03-07 00:45:20.205895 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-07 00:45:20.205906 | orchestrator | } 2026-03-07 00:45:20.205917 | orchestrator | 2026-03-07 00:45:20.205954 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-07 00:45:20.205968 | orchestrator | Saturday 07 March 2026 00:45:15 +0000 (0:00:00.147) 0:01:08.721 ******** 2026-03-07 00:45:20.205978 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:45:20.205989 | orchestrator | 2026-03-07 00:45:20.206000 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-07 00:45:20.206010 | orchestrator | Saturday 07 March 2026 00:45:15 +0000 (0:00:00.544) 0:01:09.265 ******** 2026-03-07 00:45:20.206083 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:45:20.206102 | orchestrator | 2026-03-07 00:45:20.206121 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-07 00:45:20.206138 | orchestrator | Saturday 07 March 2026 00:45:16 +0000 (0:00:00.563) 0:01:09.828 ******** 2026-03-07 00:45:20.206156 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:45:20.206176 | orchestrator | 2026-03-07 00:45:20.206190 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-07 00:45:20.206201 | orchestrator | Saturday 07 March 2026 00:45:17 +0000 (0:00:00.750) 0:01:10.579 ******** 2026-03-07 00:45:20.206211 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:45:20.206222 | orchestrator | 2026-03-07 00:45:20.206233 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-07 00:45:20.206275 | orchestrator | Saturday 07 March 2026 00:45:17 +0000 (0:00:00.139) 0:01:10.718 ******** 2026-03-07 00:45:20.206287 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.206298 | orchestrator | 2026-03-07 00:45:20.206309 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-07 00:45:20.206332 | orchestrator | Saturday 07 March 2026 00:45:17 +0000 (0:00:00.122) 0:01:10.840 ******** 2026-03-07 00:45:20.206343 | 
orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.206354 | orchestrator | 2026-03-07 00:45:20.206364 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-07 00:45:20.206375 | orchestrator | Saturday 07 March 2026 00:45:17 +0000 (0:00:00.117) 0:01:10.958 ******** 2026-03-07 00:45:20.206386 | orchestrator | ok: [testbed-node-5] => { 2026-03-07 00:45:20.206397 | orchestrator |  "vgs_report": { 2026-03-07 00:45:20.206408 | orchestrator |  "vg": [] 2026-03-07 00:45:20.206450 | orchestrator |  } 2026-03-07 00:45:20.206467 | orchestrator | } 2026-03-07 00:45:20.206484 | orchestrator | 2026-03-07 00:45:20.206500 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-07 00:45:20.206515 | orchestrator | Saturday 07 March 2026 00:45:17 +0000 (0:00:00.145) 0:01:11.103 ******** 2026-03-07 00:45:20.206531 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.206546 | orchestrator | 2026-03-07 00:45:20.206562 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-07 00:45:20.206579 | orchestrator | Saturday 07 March 2026 00:45:17 +0000 (0:00:00.137) 0:01:11.241 ******** 2026-03-07 00:45:20.206596 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.206615 | orchestrator | 2026-03-07 00:45:20.206634 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-07 00:45:20.206651 | orchestrator | Saturday 07 March 2026 00:45:17 +0000 (0:00:00.142) 0:01:11.384 ******** 2026-03-07 00:45:20.206671 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.206745 | orchestrator | 2026-03-07 00:45:20.206763 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-07 00:45:20.206775 | orchestrator | Saturday 07 March 2026 00:45:18 +0000 (0:00:00.135) 0:01:11.519 ******** 2026-03-07 00:45:20.206786 | 
orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.206796 | orchestrator | 2026-03-07 00:45:20.206807 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-07 00:45:20.206818 | orchestrator | Saturday 07 March 2026 00:45:18 +0000 (0:00:00.139) 0:01:11.659 ******** 2026-03-07 00:45:20.206828 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.206839 | orchestrator | 2026-03-07 00:45:20.206850 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-07 00:45:20.206860 | orchestrator | Saturday 07 March 2026 00:45:18 +0000 (0:00:00.136) 0:01:11.796 ******** 2026-03-07 00:45:20.206871 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.206881 | orchestrator | 2026-03-07 00:45:20.206911 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-07 00:45:20.207098 | orchestrator | Saturday 07 March 2026 00:45:18 +0000 (0:00:00.140) 0:01:11.936 ******** 2026-03-07 00:45:20.207122 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.207133 | orchestrator | 2026-03-07 00:45:20.207144 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-07 00:45:20.207155 | orchestrator | Saturday 07 March 2026 00:45:18 +0000 (0:00:00.127) 0:01:12.064 ******** 2026-03-07 00:45:20.207165 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.207176 | orchestrator | 2026-03-07 00:45:20.207187 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-07 00:45:20.207197 | orchestrator | Saturday 07 March 2026 00:45:18 +0000 (0:00:00.308) 0:01:12.372 ******** 2026-03-07 00:45:20.207207 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.207217 | orchestrator | 2026-03-07 00:45:20.207234 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-03-07 00:45:20.207244 | orchestrator | Saturday 07 March 2026 00:45:18 +0000 (0:00:00.133) 0:01:12.506 ******** 2026-03-07 00:45:20.207253 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.207263 | orchestrator | 2026-03-07 00:45:20.207272 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-07 00:45:20.207282 | orchestrator | Saturday 07 March 2026 00:45:19 +0000 (0:00:00.134) 0:01:12.640 ******** 2026-03-07 00:45:20.207302 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.207312 | orchestrator | 2026-03-07 00:45:20.207321 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-07 00:45:20.207331 | orchestrator | Saturday 07 March 2026 00:45:19 +0000 (0:00:00.134) 0:01:12.775 ******** 2026-03-07 00:45:20.207341 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.207350 | orchestrator | 2026-03-07 00:45:20.207360 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-07 00:45:20.207369 | orchestrator | Saturday 07 March 2026 00:45:19 +0000 (0:00:00.131) 0:01:12.907 ******** 2026-03-07 00:45:20.207379 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.207388 | orchestrator | 2026-03-07 00:45:20.207398 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-07 00:45:20.207407 | orchestrator | Saturday 07 March 2026 00:45:19 +0000 (0:00:00.147) 0:01:13.054 ******** 2026-03-07 00:45:20.207417 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.207426 | orchestrator | 2026-03-07 00:45:20.207436 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-07 00:45:20.207445 | orchestrator | Saturday 07 March 2026 00:45:19 +0000 (0:00:00.145) 0:01:13.199 ******** 2026-03-07 00:45:20.207455 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'})  2026-03-07 00:45:20.207465 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'})  2026-03-07 00:45:20.207475 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.207484 | orchestrator | 2026-03-07 00:45:20.207494 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-07 00:45:20.207503 | orchestrator | Saturday 07 March 2026 00:45:19 +0000 (0:00:00.175) 0:01:13.375 ******** 2026-03-07 00:45:20.207513 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'})  2026-03-07 00:45:20.207523 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'})  2026-03-07 00:45:20.207532 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:20.207542 | orchestrator | 2026-03-07 00:45:20.207551 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-07 00:45:20.207561 | orchestrator | Saturday 07 March 2026 00:45:20 +0000 (0:00:00.156) 0:01:13.531 ******** 2026-03-07 00:45:20.207586 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'})  2026-03-07 00:45:23.264813 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'})  2026-03-07 00:45:23.265071 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:23.265135 | orchestrator | 2026-03-07 00:45:23.265159 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-03-07 00:45:23.265181 | orchestrator | Saturday 07 March 2026 00:45:20 +0000 (0:00:00.182) 0:01:13.714 ******** 2026-03-07 00:45:23.265202 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'})  2026-03-07 00:45:23.265221 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'})  2026-03-07 00:45:23.265241 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:23.265261 | orchestrator | 2026-03-07 00:45:23.265281 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-07 00:45:23.265299 | orchestrator | Saturday 07 March 2026 00:45:20 +0000 (0:00:00.157) 0:01:13.872 ******** 2026-03-07 00:45:23.265351 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'})  2026-03-07 00:45:23.265372 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'})  2026-03-07 00:45:23.265393 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:23.265413 | orchestrator | 2026-03-07 00:45:23.265433 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-07 00:45:23.265454 | orchestrator | Saturday 07 March 2026 00:45:20 +0000 (0:00:00.165) 0:01:14.037 ******** 2026-03-07 00:45:23.265475 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'})  2026-03-07 00:45:23.265497 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'})  2026-03-07 00:45:23.265538 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:23.265560 | orchestrator | 2026-03-07 00:45:23.265581 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-07 00:45:23.265601 | orchestrator | Saturday 07 March 2026 00:45:20 +0000 (0:00:00.343) 0:01:14.380 ******** 2026-03-07 00:45:23.265622 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'})  2026-03-07 00:45:23.265643 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'})  2026-03-07 00:45:23.265664 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:23.265684 | orchestrator | 2026-03-07 00:45:23.265703 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-07 00:45:23.265723 | orchestrator | Saturday 07 March 2026 00:45:21 +0000 (0:00:00.166) 0:01:14.547 ******** 2026-03-07 00:45:23.265743 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'})  2026-03-07 00:45:23.265761 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'})  2026-03-07 00:45:23.265780 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:23.265800 | orchestrator | 2026-03-07 00:45:23.265819 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-07 00:45:23.265838 | orchestrator | Saturday 07 March 2026 00:45:21 +0000 (0:00:00.157) 0:01:14.705 ******** 2026-03-07 00:45:23.265857 | 
orchestrator | ok: [testbed-node-5] 2026-03-07 00:45:23.265877 | orchestrator | 2026-03-07 00:45:23.265896 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-07 00:45:23.265915 | orchestrator | Saturday 07 March 2026 00:45:21 +0000 (0:00:00.523) 0:01:15.228 ******** 2026-03-07 00:45:23.265935 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:45:23.265981 | orchestrator | 2026-03-07 00:45:23.265999 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-07 00:45:23.266096 | orchestrator | Saturday 07 March 2026 00:45:22 +0000 (0:00:00.537) 0:01:15.765 ******** 2026-03-07 00:45:23.266119 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:45:23.266136 | orchestrator | 2026-03-07 00:45:23.266154 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-07 00:45:23.266171 | orchestrator | Saturday 07 March 2026 00:45:22 +0000 (0:00:00.142) 0:01:15.908 ******** 2026-03-07 00:45:23.266188 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'vg_name': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'}) 2026-03-07 00:45:23.266208 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'vg_name': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'}) 2026-03-07 00:45:23.266241 | orchestrator | 2026-03-07 00:45:23.266260 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-07 00:45:23.266279 | orchestrator | Saturday 07 March 2026 00:45:22 +0000 (0:00:00.173) 0:01:16.082 ******** 2026-03-07 00:45:23.266327 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'})  2026-03-07 00:45:23.266346 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'})  2026-03-07 00:45:23.266365 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:23.266384 | orchestrator | 2026-03-07 00:45:23.266401 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-07 00:45:23.266419 | orchestrator | Saturday 07 March 2026 00:45:22 +0000 (0:00:00.162) 0:01:16.244 ******** 2026-03-07 00:45:23.266439 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'})  2026-03-07 00:45:23.266458 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'})  2026-03-07 00:45:23.266478 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:23.266497 | orchestrator | 2026-03-07 00:45:23.266516 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-07 00:45:23.266535 | orchestrator | Saturday 07 March 2026 00:45:22 +0000 (0:00:00.165) 0:01:16.410 ******** 2026-03-07 00:45:23.266555 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'})  2026-03-07 00:45:23.266575 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'})  2026-03-07 00:45:23.266594 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:23.266614 | orchestrator | 2026-03-07 00:45:23.266634 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-07 00:45:23.266653 | orchestrator | Saturday 07 March 2026 00:45:23 +0000 (0:00:00.169) 0:01:16.579 ******** 2026-03-07 00:45:23.266673 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-07 00:45:23.266693 | orchestrator |  "lvm_report": { 2026-03-07 00:45:23.266713 | orchestrator |  "lv": [ 2026-03-07 00:45:23.266732 | orchestrator |  { 2026-03-07 00:45:23.266752 | orchestrator |  "lv_name": "osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3", 2026-03-07 00:45:23.266785 | orchestrator |  "vg_name": "ceph-3960461f-aa79-5447-98f8-9395cd95d2e3" 2026-03-07 00:45:23.266805 | orchestrator |  }, 2026-03-07 00:45:23.266823 | orchestrator |  { 2026-03-07 00:45:23.266841 | orchestrator |  "lv_name": "osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d", 2026-03-07 00:45:23.266859 | orchestrator |  "vg_name": "ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d" 2026-03-07 00:45:23.266876 | orchestrator |  } 2026-03-07 00:45:23.266894 | orchestrator |  ], 2026-03-07 00:45:23.266911 | orchestrator |  "pv": [ 2026-03-07 00:45:23.266929 | orchestrator |  { 2026-03-07 00:45:23.266973 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-07 00:45:23.266993 | orchestrator |  "vg_name": "ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d" 2026-03-07 00:45:23.267010 | orchestrator |  }, 2026-03-07 00:45:23.267028 | orchestrator |  { 2026-03-07 00:45:23.267047 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-07 00:45:23.267065 | orchestrator |  "vg_name": "ceph-3960461f-aa79-5447-98f8-9395cd95d2e3" 2026-03-07 00:45:23.267084 | orchestrator |  } 2026-03-07 00:45:23.267102 | orchestrator |  ] 2026-03-07 00:45:23.267120 | orchestrator |  } 2026-03-07 00:45:23.267138 | orchestrator | } 2026-03-07 00:45:23.267173 | orchestrator | 2026-03-07 00:45:23.267192 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:45:23.267210 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-07 00:45:23.267229 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-07 00:45:23.267247 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-07 00:45:23.267265 | orchestrator | 2026-03-07 00:45:23.267284 | orchestrator | 2026-03-07 00:45:23.267302 | orchestrator | 2026-03-07 00:45:23.267320 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:45:23.267338 | orchestrator | Saturday 07 March 2026 00:45:23 +0000 (0:00:00.169) 0:01:16.749 ******** 2026-03-07 00:45:23.267354 | orchestrator | =============================================================================== 2026-03-07 00:45:23.267370 | orchestrator | Create block VGs -------------------------------------------------------- 5.95s 2026-03-07 00:45:23.267385 | orchestrator | Create block LVs -------------------------------------------------------- 4.36s 2026-03-07 00:45:23.267403 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.92s 2026-03-07 00:45:23.267420 | orchestrator | Add known partitions to the list of available block devices ------------- 1.90s 2026-03-07 00:45:23.267437 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.83s 2026-03-07 00:45:23.267454 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.72s 2026-03-07 00:45:23.267472 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.64s 2026-03-07 00:45:23.267489 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.58s 2026-03-07 00:45:23.267520 | orchestrator | Add known links to the list of available block devices ------------------ 1.31s 2026-03-07 00:45:23.715033 | orchestrator | Add known partitions to the list of available block devices ------------- 1.18s 2026-03-07 00:45:23.715142 | orchestrator | Print LVM report data --------------------------------------------------- 0.99s 2026-03-07 00:45:23.715159 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.92s 2026-03-07 00:45:23.715171 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s 2026-03-07 00:45:23.715182 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.79s 2026-03-07 00:45:23.715193 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s 2026-03-07 00:45:23.715204 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.77s 2026-03-07 00:45:23.715215 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.76s 2026-03-07 00:45:23.715225 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2026-03-07 00:45:23.715236 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-03-07 00:45:23.715247 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.70s 2026-03-07 00:45:36.255517 | orchestrator | 2026-03-07 00:45:36 | INFO  | Task e998f3ae-2d7c-438e-9c16-29ee8b49db2f (facts) was prepared for execution. 2026-03-07 00:45:36.255638 | orchestrator | 2026-03-07 00:45:36 | INFO  | It takes a moment until task e998f3ae-2d7c-438e-9c16-29ee8b49db2f (facts) has been started and output is visible here. 
2026-03-07 00:45:48.380586 | orchestrator | 2026-03-07 00:45:48.380703 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-07 00:45:48.380717 | orchestrator | 2026-03-07 00:45:48.380727 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-07 00:45:48.380736 | orchestrator | Saturday 07 March 2026 00:45:40 +0000 (0:00:00.274) 0:00:00.274 ******** 2026-03-07 00:45:48.380770 | orchestrator | ok: [testbed-manager] 2026-03-07 00:45:48.380782 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:45:48.380790 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:45:48.380798 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:45:48.380806 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:45:48.380814 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:45:48.380822 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:45:48.380830 | orchestrator | 2026-03-07 00:45:48.380838 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-07 00:45:48.380861 | orchestrator | Saturday 07 March 2026 00:45:41 +0000 (0:00:01.109) 0:00:01.384 ******** 2026-03-07 00:45:48.380870 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:45:48.380879 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:45:48.380887 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:45:48.380896 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:45:48.380903 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:45:48.380911 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:45:48.380918 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:48.380927 | orchestrator | 2026-03-07 00:45:48.380935 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-07 00:45:48.380943 | orchestrator | 2026-03-07 00:45:48.380951 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-07 00:45:48.380959 | orchestrator | Saturday 07 March 2026 00:45:42 +0000 (0:00:01.213) 0:00:02.598 ******** 2026-03-07 00:45:48.380966 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:45:48.380974 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:45:48.380982 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:45:48.380990 | orchestrator | ok: [testbed-manager] 2026-03-07 00:45:48.380998 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:45:48.381006 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:45:48.381013 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:45:48.381021 | orchestrator | 2026-03-07 00:45:48.381029 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-07 00:45:48.381037 | orchestrator | 2026-03-07 00:45:48.381045 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-07 00:45:48.381053 | orchestrator | Saturday 07 March 2026 00:45:47 +0000 (0:00:04.576) 0:00:07.175 ******** 2026-03-07 00:45:48.381061 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:45:48.381070 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:45:48.381078 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:45:48.381118 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:45:48.381126 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:45:48.381134 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:45:48.381142 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:45:48.381150 | orchestrator | 2026-03-07 00:45:48.381159 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:45:48.381167 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:45:48.381178 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-07 00:45:48.381187 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:45:48.381195 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:45:48.381203 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:45:48.381212 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:45:48.381220 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:45:48.381235 | orchestrator | 2026-03-07 00:45:48.381244 | orchestrator | 2026-03-07 00:45:48.381252 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:45:48.381261 | orchestrator | Saturday 07 March 2026 00:45:47 +0000 (0:00:00.512) 0:00:07.687 ******** 2026-03-07 00:45:48.381270 | orchestrator | =============================================================================== 2026-03-07 00:45:48.381278 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.58s 2026-03-07 00:45:48.381287 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.21s 2026-03-07 00:45:48.381295 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s 2026-03-07 00:45:48.381303 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2026-03-07 00:46:00.867944 | orchestrator | 2026-03-07 00:46:00 | INFO  | Task a68ea232-fb64-4d7d-96d3-35d133fa49db (frr) was prepared for execution. 2026-03-07 00:46:00.868058 | orchestrator | 2026-03-07 00:46:00 | INFO  | It takes a moment until task a68ea232-fb64-4d7d-96d3-35d133fa49db (frr) has been started and output is visible here. 
2026-03-07 00:46:27.419029 | orchestrator | 2026-03-07 00:46:27.419125 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-07 00:46:27.419137 | orchestrator | 2026-03-07 00:46:27.419146 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-07 00:46:27.419154 | orchestrator | Saturday 07 March 2026 00:46:05 +0000 (0:00:00.238) 0:00:00.238 ******** 2026-03-07 00:46:27.419162 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-07 00:46:27.419171 | orchestrator | 2026-03-07 00:46:27.419179 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-07 00:46:27.419186 | orchestrator | Saturday 07 March 2026 00:46:05 +0000 (0:00:00.218) 0:00:00.457 ******** 2026-03-07 00:46:27.419194 | orchestrator | changed: [testbed-manager] 2026-03-07 00:46:27.419202 | orchestrator | 2026-03-07 00:46:27.419209 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-07 00:46:27.419217 | orchestrator | Saturday 07 March 2026 00:46:06 +0000 (0:00:01.189) 0:00:01.647 ******** 2026-03-07 00:46:27.419225 | orchestrator | changed: [testbed-manager] 2026-03-07 00:46:27.419233 | orchestrator | 2026-03-07 00:46:27.419245 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-07 00:46:27.419257 | orchestrator | Saturday 07 March 2026 00:46:16 +0000 (0:00:09.879) 0:00:11.526 ******** 2026-03-07 00:46:27.419269 | orchestrator | ok: [testbed-manager] 2026-03-07 00:46:27.419329 | orchestrator | 2026-03-07 00:46:27.419343 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-07 00:46:27.419355 | orchestrator | Saturday 07 March 2026 00:46:17 +0000 (0:00:01.039) 0:00:12.565 ******** 2026-03-07 
00:46:27.419366 | orchestrator | changed: [testbed-manager] 2026-03-07 00:46:27.419377 | orchestrator | 2026-03-07 00:46:27.419389 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-07 00:46:27.419401 | orchestrator | Saturday 07 March 2026 00:46:18 +0000 (0:00:01.014) 0:00:13.579 ******** 2026-03-07 00:46:27.419412 | orchestrator | ok: [testbed-manager] 2026-03-07 00:46:27.419423 | orchestrator | 2026-03-07 00:46:27.419435 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-07 00:46:27.419448 | orchestrator | Saturday 07 March 2026 00:46:19 +0000 (0:00:01.268) 0:00:14.848 ******** 2026-03-07 00:46:27.419461 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:46:27.419469 | orchestrator | 2026-03-07 00:46:27.419477 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-07 00:46:27.419484 | orchestrator | Saturday 07 March 2026 00:46:19 +0000 (0:00:00.139) 0:00:14.988 ******** 2026-03-07 00:46:27.419509 | orchestrator | skipping: [testbed-manager] 2026-03-07 00:46:27.419536 | orchestrator | 2026-03-07 00:46:27.419544 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-07 00:46:27.419552 | orchestrator | Saturday 07 March 2026 00:46:19 +0000 (0:00:00.178) 0:00:15.167 ******** 2026-03-07 00:46:27.419559 | orchestrator | changed: [testbed-manager] 2026-03-07 00:46:27.419567 | orchestrator | 2026-03-07 00:46:27.419574 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-07 00:46:27.419583 | orchestrator | Saturday 07 March 2026 00:46:21 +0000 (0:00:01.060) 0:00:16.227 ******** 2026-03-07 00:46:27.419592 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-07 00:46:27.419600 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-07 00:46:27.419610 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-07 00:46:27.419619 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-07 00:46:27.419627 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-07 00:46:27.419636 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-07 00:46:27.419644 | orchestrator | 2026-03-07 00:46:27.419652 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-03-07 00:46:27.419661 | orchestrator | Saturday 07 March 2026 00:46:24 +0000 (0:00:03.349) 0:00:19.577 ******** 2026-03-07 00:46:27.419669 | orchestrator | ok: [testbed-manager] 2026-03-07 00:46:27.419678 | orchestrator | 2026-03-07 00:46:27.419687 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-07 00:46:27.419695 | orchestrator | Saturday 07 March 2026 00:46:25 +0000 (0:00:01.433) 0:00:21.011 ******** 2026-03-07 00:46:27.419703 | orchestrator | changed: [testbed-manager] 2026-03-07 00:46:27.419712 | orchestrator | 2026-03-07 00:46:27.419720 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:46:27.419729 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-07 00:46:27.419738 | orchestrator | 2026-03-07 00:46:27.419746 | orchestrator | 2026-03-07 00:46:27.419755 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:46:27.419763 | orchestrator | Saturday 07 March 2026 00:46:27 +0000 (0:00:01.354) 0:00:22.365 ******** 2026-03-07 00:46:27.419771 | 
orchestrator | =============================================================================== 2026-03-07 00:46:27.419780 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.88s 2026-03-07 00:46:27.419788 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.35s 2026-03-07 00:46:27.419796 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.43s 2026-03-07 00:46:27.419804 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.36s 2026-03-07 00:46:27.419812 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.27s 2026-03-07 00:46:27.419837 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.19s 2026-03-07 00:46:27.419845 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.06s 2026-03-07 00:46:27.419853 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.04s 2026-03-07 00:46:27.419862 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.01s 2026-03-07 00:46:27.419870 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2026-03-07 00:46:27.419878 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.18s 2026-03-07 00:46:27.419887 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-03-07 00:46:27.636818 | orchestrator | 2026-03-07 00:46:27.637568 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Mar 7 00:46:27 UTC 2026 2026-03-07 00:46:27.637604 | orchestrator | 2026-03-07 00:46:29.393801 | orchestrator | 2026-03-07 00:46:29 | INFO  | Collection nutshell is prepared for execution 2026-03-07 00:46:29.394915 | orchestrator | 2026-03-07 00:46:29 | INFO  | A [0] - 
dotfiles 2026-03-07 00:46:39.463060 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [0] - homer 2026-03-07 00:46:39.463144 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [0] - netdata 2026-03-07 00:46:39.463152 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [0] - openstackclient 2026-03-07 00:46:39.463157 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [0] - phpmyadmin 2026-03-07 00:46:39.463162 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [0] - common 2026-03-07 00:46:39.468129 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [1] -- loadbalancer 2026-03-07 00:46:39.468277 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [2] --- opensearch 2026-03-07 00:46:39.468306 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [2] --- mariadb-ng 2026-03-07 00:46:39.468312 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [3] ---- horizon 2026-03-07 00:46:39.468317 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [3] ---- keystone 2026-03-07 00:46:39.468322 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [4] ----- neutron 2026-03-07 00:46:39.468335 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [5] ------ wait-for-nova 2026-03-07 00:46:39.468527 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [6] ------- octavia 2026-03-07 00:46:39.469961 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [4] ----- barbican 2026-03-07 00:46:39.470058 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [4] ----- designate 2026-03-07 00:46:39.470278 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [4] ----- ironic 2026-03-07 00:46:39.470884 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [4] ----- placement 2026-03-07 00:46:39.470904 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [4] ----- magnum 2026-03-07 00:46:39.471220 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [1] -- openvswitch 2026-03-07 00:46:39.471494 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [2] --- ovn 2026-03-07 00:46:39.471953 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [1] -- memcached 2026-03-07 
00:46:39.472246 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [1] -- redis 2026-03-07 00:46:39.472710 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [1] -- rabbitmq-ng 2026-03-07 00:46:39.474009 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [0] - kubernetes 2026-03-07 00:46:39.476149 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [1] -- kubeconfig 2026-03-07 00:46:39.476186 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [1] -- copy-kubeconfig 2026-03-07 00:46:39.476817 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [0] - ceph 2026-03-07 00:46:39.478943 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [1] -- ceph-pools 2026-03-07 00:46:39.478977 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [2] --- copy-ceph-keys 2026-03-07 00:46:39.479172 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [3] ---- cephclient 2026-03-07 00:46:39.479433 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-07 00:46:39.479586 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [4] ----- wait-for-keystone 2026-03-07 00:46:39.482074 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-07 00:46:39.482124 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [5] ------ glance 2026-03-07 00:46:39.482137 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [5] ------ cinder 2026-03-07 00:46:39.482181 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [5] ------ nova 2026-03-07 00:46:39.482192 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [4] ----- prometheus 2026-03-07 00:46:39.482204 | orchestrator | 2026-03-07 00:46:39 | INFO  | A [5] ------ grafana 2026-03-07 00:46:39.727903 | orchestrator | 2026-03-07 00:46:39 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-07 00:46:39.728000 | orchestrator | 2026-03-07 00:46:39 | INFO  | Tasks are running in the background 2026-03-07 00:46:43.360677 | orchestrator | 2026-03-07 00:46:43 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-03-07 00:46:45.506768 | orchestrator | 2026-03-07 00:46:45 | INFO  | Task f5a164ab-7456-4bb3-8d79-e9b9e38ef924 is in state STARTED 2026-03-07 00:46:45.507100 | orchestrator | 2026-03-07 00:46:45 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:46:45.507860 | orchestrator | 2026-03-07 00:46:45 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:46:45.508528 | orchestrator | 2026-03-07 00:46:45 | INFO  | Task dd3f373f-2a56-4f50-bfe7-de239a7389dd is in state STARTED 2026-03-07 00:46:45.509648 | orchestrator | 2026-03-07 00:46:45 | INFO  | Task b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:46:45.510100 | orchestrator | 2026-03-07 00:46:45 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:46:45.515804 | orchestrator | 2026-03-07 00:46:45 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:46:45.515882 | orchestrator | 2026-03-07 00:46:45 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:46:48.618818 | orchestrator | 2026-03-07 00:46:48 | INFO  | Task f5a164ab-7456-4bb3-8d79-e9b9e38ef924 is in state STARTED 2026-03-07 00:46:48.619046 | orchestrator | 2026-03-07 00:46:48 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:46:48.620032 | orchestrator | 2026-03-07 00:46:48 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:46:48.620337 | orchestrator | 2026-03-07 00:46:48 | INFO  | Task dd3f373f-2a56-4f50-bfe7-de239a7389dd is in state STARTED 2026-03-07 00:46:48.621414 | orchestrator | 2026-03-07 00:46:48 | INFO  | Task b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:46:48.621746 | orchestrator | 2026-03-07 00:46:48 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:46:48.622624 | orchestrator | 2026-03-07 00:46:48 | INFO  | Task 
2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:46:48.622670 | orchestrator | 2026-03-07 00:46:48 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:46:51.668847 | orchestrator | 2026-03-07 00:46:51 | INFO  | Task f5a164ab-7456-4bb3-8d79-e9b9e38ef924 is in state STARTED 2026-03-07 00:46:51.671508 | orchestrator | 2026-03-07 00:46:51 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:46:51.671761 | orchestrator | 2026-03-07 00:46:51 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:46:51.672369 | orchestrator | 2026-03-07 00:46:51 | INFO  | Task dd3f373f-2a56-4f50-bfe7-de239a7389dd is in state STARTED 2026-03-07 00:46:51.673744 | orchestrator | 2026-03-07 00:46:51 | INFO  | Task b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:46:51.674111 | orchestrator | 2026-03-07 00:46:51 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:46:51.674867 | orchestrator | 2026-03-07 00:46:51 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:46:51.674929 | orchestrator | 2026-03-07 00:46:51 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:46:54.774004 | orchestrator | 2026-03-07 00:46:54 | INFO  | Task f5a164ab-7456-4bb3-8d79-e9b9e38ef924 is in state STARTED 2026-03-07 00:46:54.774115 | orchestrator | 2026-03-07 00:46:54 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:46:54.774126 | orchestrator | 2026-03-07 00:46:54 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:46:54.774133 | orchestrator | 2026-03-07 00:46:54 | INFO  | Task dd3f373f-2a56-4f50-bfe7-de239a7389dd is in state STARTED 2026-03-07 00:46:54.774140 | orchestrator | 2026-03-07 00:46:54 | INFO  | Task b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:46:54.774146 | orchestrator | 2026-03-07 00:46:54 | INFO  | Task 
32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:46:54.774152 | orchestrator | 2026-03-07 00:46:54 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:46:54.774159 | orchestrator | 2026-03-07 00:46:54 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:46:57.871130 | orchestrator | 2026-03-07 00:46:57 | INFO  | Task f5a164ab-7456-4bb3-8d79-e9b9e38ef924 is in state STARTED 2026-03-07 00:46:57.871236 | orchestrator | 2026-03-07 00:46:57 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:46:57.871250 | orchestrator | 2026-03-07 00:46:57 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:46:57.871260 | orchestrator | 2026-03-07 00:46:57 | INFO  | Task dd3f373f-2a56-4f50-bfe7-de239a7389dd is in state STARTED 2026-03-07 00:46:57.871270 | orchestrator | 2026-03-07 00:46:57 | INFO  | Task b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:46:57.871280 | orchestrator | 2026-03-07 00:46:57 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:46:57.871289 | orchestrator | 2026-03-07 00:46:57 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:46:57.871299 | orchestrator | 2026-03-07 00:46:57 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:47:00.995070 | orchestrator | 2026-03-07 00:47:00 | INFO  | Task f5a164ab-7456-4bb3-8d79-e9b9e38ef924 is in state STARTED 2026-03-07 00:47:00.997812 | orchestrator | 2026-03-07 00:47:00 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:47:01.000231 | orchestrator | 2026-03-07 00:47:01 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:47:01.004233 | orchestrator | 2026-03-07 00:47:01 | INFO  | Task dd3f373f-2a56-4f50-bfe7-de239a7389dd is in state STARTED 2026-03-07 00:47:01.004952 | orchestrator | 2026-03-07 00:47:01 | INFO  | Task 
b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:47:01.007082 | orchestrator | 2026-03-07 00:47:01 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:47:01.008985 | orchestrator | 2026-03-07 00:47:01 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:47:01.009029 | orchestrator | 2026-03-07 00:47:01 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:47:04.103301 | orchestrator | 2026-03-07 00:47:04 | INFO  | Task f5a164ab-7456-4bb3-8d79-e9b9e38ef924 is in state STARTED 2026-03-07 00:47:04.103404 | orchestrator | 2026-03-07 00:47:04 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:47:04.103417 | orchestrator | 2026-03-07 00:47:04 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:47:04.103449 | orchestrator | 2026-03-07 00:47:04 | INFO  | Task dd3f373f-2a56-4f50-bfe7-de239a7389dd is in state STARTED 2026-03-07 00:47:04.103514 | orchestrator | 2026-03-07 00:47:04 | INFO  | Task b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:47:04.103525 | orchestrator | 2026-03-07 00:47:04 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:47:04.103533 | orchestrator | 2026-03-07 00:47:04 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:47:04.103542 | orchestrator | 2026-03-07 00:47:04 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:47:07.245904 | orchestrator | 2026-03-07 00:47:07 | INFO  | Task f5a164ab-7456-4bb3-8d79-e9b9e38ef924 is in state STARTED 2026-03-07 00:47:07.246171 | orchestrator | 2026-03-07 00:47:07 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:47:07.246203 | orchestrator | 2026-03-07 00:47:07 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:47:07.246222 | orchestrator | 2026-03-07 00:47:07 | INFO  | Task 
dd3f373f-2a56-4f50-bfe7-de239a7389dd is in state STARTED 2026-03-07 00:47:07.246240 | orchestrator | 2026-03-07 00:47:07 | INFO  | Task b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:47:07.246257 | orchestrator | 2026-03-07 00:47:07 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:47:07.246274 | orchestrator | 2026-03-07 00:47:07 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:47:07.246292 | orchestrator | 2026-03-07 00:47:07 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:47:10.327014 | orchestrator | 2026-03-07 00:47:10 | INFO  | Task f5a164ab-7456-4bb3-8d79-e9b9e38ef924 is in state STARTED 2026-03-07 00:47:10.329707 | orchestrator | 2026-03-07 00:47:10 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:47:10.333002 | orchestrator | 2026-03-07 00:47:10 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:47:10.357576 | orchestrator | 2026-03-07 00:47:10 | INFO  | Task dd3f373f-2a56-4f50-bfe7-de239a7389dd is in state STARTED 2026-03-07 00:47:10.357658 | orchestrator | 2026-03-07 00:47:10 | INFO  | Task b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:47:10.453791 | orchestrator | 2026-03-07 00:47:10 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:47:10.453877 | orchestrator | 2026-03-07 00:47:10 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:47:10.453887 | orchestrator | 2026-03-07 00:47:10 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:47:13.443282 | orchestrator | 2026-03-07 00:47:13.443417 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-03-07 00:47:13.443434 | orchestrator | 2026-03-07 00:47:13.443446 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2026-03-07 00:47:13.443458 | orchestrator | Saturday 07 March 2026 00:46:53 +0000 (0:00:00.613) 0:00:00.613 ******** 2026-03-07 00:47:13.443470 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:47:13.443482 | orchestrator | changed: [testbed-manager] 2026-03-07 00:47:13.443493 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:47:13.443554 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:47:13.443567 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:47:13.443578 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:47:13.443589 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:47:13.443600 | orchestrator | 2026-03-07 00:47:13.443619 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-03-07 00:47:13.443657 | orchestrator | Saturday 07 March 2026 00:46:58 +0000 (0:00:05.217) 0:00:05.831 ******** 2026-03-07 00:47:13.443669 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-07 00:47:13.443681 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-07 00:47:13.443692 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-07 00:47:13.443702 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-07 00:47:13.443713 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-07 00:47:13.443724 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-07 00:47:13.443735 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-07 00:47:13.443746 | orchestrator | 2026-03-07 00:47:13.443756 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-03-07 00:47:13.443768 | orchestrator | Saturday 07 March 2026 00:47:01 +0000 (0:00:02.580) 0:00:08.412 ******** 2026-03-07 00:47:13.443783 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-07 00:46:59.359603', 'end': '2026-03-07 00:46:59.365423', 'delta': '0:00:00.005820', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-07 00:47:13.443800 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-07 00:46:59.591924', 'end': '2026-03-07 00:46:59.601800', 'delta': '0:00:00.009876', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-07 00:47:13.443814 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-07 00:46:59.952998', 'end': '2026-03-07 00:46:59.959235', 'delta': '0:00:00.006237', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-07 00:47:13.443856 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-07 00:46:59.989962', 'end': '2026-03-07 00:46:59.997465', 'delta': '0:00:00.007503', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-07 00:47:13.443886 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-07 00:47:00.544726', 'end': '2026-03-07 00:47:00.555731', 'delta': '0:00:00.011005', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-07 00:47:13.444202 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-07 00:47:00.273308', 'end': '2026-03-07 00:47:00.283612', 'delta': '0:00:00.010304', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-07 00:47:13.444217 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-07 00:46:59.195075', 'end': '2026-03-07 00:46:59.201074', 'delta': '0:00:00.005999', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-07 00:47:13.444231 | orchestrator | 2026-03-07 00:47:13.444242 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-03-07 00:47:13.444253 | orchestrator | Saturday 07 March 2026 00:47:04 +0000 (0:00:02.846) 0:00:11.259 ******** 2026-03-07 00:47:13.444277 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-07 00:47:13.444288 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-07 00:47:13.444299 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-07 00:47:13.444309 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-07 00:47:13.444320 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-07 00:47:13.444331 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-07 00:47:13.444341 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-07 00:47:13.444352 | orchestrator | 2026-03-07 00:47:13.444363 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-03-07 00:47:13.444374 | orchestrator | Saturday 07 March 2026 00:47:06 +0000 (0:00:02.683) 0:00:13.942 ******** 2026-03-07 00:47:13.444385 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-03-07 00:47:13.444396 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-03-07 00:47:13.444414 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-03-07 00:47:13.444425 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-03-07 00:47:13.444436 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-03-07 00:47:13.444447 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-03-07 00:47:13.444458 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-03-07 00:47:13.444468 | orchestrator | 2026-03-07 00:47:13.444479 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:47:13.444527 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:47:13.444543 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:47:13.444554 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:47:13.444565 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:47:13.444576 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:47:13.444587 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:47:13.444598 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:47:13.444609 | orchestrator | 2026-03-07 00:47:13.444620 | orchestrator | 2026-03-07 00:47:13.444631 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-03-07 00:47:13.444642 | orchestrator | Saturday 07 March 2026 00:47:10 +0000 (0:00:03.648) 0:00:17.591 ******** 2026-03-07 00:47:13.444653 | orchestrator | =============================================================================== 2026-03-07 00:47:13.444664 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 5.22s 2026-03-07 00:47:13.444675 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.65s 2026-03-07 00:47:13.444686 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.85s 2026-03-07 00:47:13.444696 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.68s 2026-03-07 00:47:13.444707 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.58s 2026-03-07 00:47:13.444718 | orchestrator | 2026-03-07 00:47:13 | INFO  | Task f5a164ab-7456-4bb3-8d79-e9b9e38ef924 is in state SUCCESS 2026-03-07 00:47:13.444729 | orchestrator | 2026-03-07 00:47:13 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:47:13.447703 | orchestrator | 2026-03-07 00:47:13 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:47:13.447737 | orchestrator | 2026-03-07 00:47:13 | INFO  | Task dd3f373f-2a56-4f50-bfe7-de239a7389dd is in state STARTED 2026-03-07 00:47:13.458138 | orchestrator | 2026-03-07 00:47:13 | INFO  | Task b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:47:13.473280 | orchestrator | 2026-03-07 00:47:13 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:47:13.475415 | orchestrator | 2026-03-07 00:47:13 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:47:13.477242 | orchestrator | 2026-03-07 00:47:13 | INFO  | Task 
0819e3b2-3383-40b8-a688-e6619e4ffa11 is in state STARTED 2026-03-07 00:47:13.477289 | orchestrator | 2026-03-07 00:47:13 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:47:16.595665 | orchestrator | 2026-03-07 00:47:16 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:47:16.595795 | orchestrator | 2026-03-07 00:47:16 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:47:16.595812 | orchestrator | 2026-03-07 00:47:16 | INFO  | Task dd3f373f-2a56-4f50-bfe7-de239a7389dd is in state STARTED 2026-03-07 00:47:16.595824 | orchestrator | 2026-03-07 00:47:16 | INFO  | Task b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:47:16.595835 | orchestrator | 2026-03-07 00:47:16 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:47:16.595847 | orchestrator | 2026-03-07 00:47:16 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:47:16.595858 | orchestrator | 2026-03-07 00:47:16 | INFO  | Task 0819e3b2-3383-40b8-a688-e6619e4ffa11 is in state STARTED 2026-03-07 00:47:16.595869 | orchestrator | 2026-03-07 00:47:16 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:47:19.688621 | orchestrator | 2026-03-07 00:47:19 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:47:19.689676 | orchestrator | 2026-03-07 00:47:19 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:47:19.697188 | orchestrator | 2026-03-07 00:47:19 | INFO  | Task dd3f373f-2a56-4f50-bfe7-de239a7389dd is in state STARTED 2026-03-07 00:47:19.697284 | orchestrator | 2026-03-07 00:47:19 | INFO  | Task b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:47:19.697300 | orchestrator | 2026-03-07 00:47:19 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:47:19.697311 | orchestrator | 2026-03-07 00:47:19 | INFO  | Task 
2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:47:19.697322 | orchestrator | 2026-03-07 00:47:19 | INFO  | Task 0819e3b2-3383-40b8-a688-e6619e4ffa11 is in state STARTED 2026-03-07 00:47:19.697334 | orchestrator | 2026-03-07 00:47:19 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:47:22.846224 | orchestrator | 2026-03-07 00:47:22 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:47:22.846338 | orchestrator | 2026-03-07 00:47:22 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:47:22.846354 | orchestrator | 2026-03-07 00:47:22 | INFO  | Task dd3f373f-2a56-4f50-bfe7-de239a7389dd is in state STARTED 2026-03-07 00:47:22.846367 | orchestrator | 2026-03-07 00:47:22 | INFO  | Task b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:47:22.846378 | orchestrator | 2026-03-07 00:47:22 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:47:22.846389 | orchestrator | 2026-03-07 00:47:22 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:47:22.846400 | orchestrator | 2026-03-07 00:47:22 | INFO  | Task 0819e3b2-3383-40b8-a688-e6619e4ffa11 is in state STARTED 2026-03-07 00:47:22.846412 | orchestrator | 2026-03-07 00:47:22 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:47:25.811406 | orchestrator | 2026-03-07 00:47:25 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:47:25.813715 | orchestrator | 2026-03-07 00:47:25 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:47:25.814592 | orchestrator | 2026-03-07 00:47:25 | INFO  | Task dd3f373f-2a56-4f50-bfe7-de239a7389dd is in state STARTED 2026-03-07 00:47:25.815685 | orchestrator | 2026-03-07 00:47:25 | INFO  | Task b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:47:25.818150 | orchestrator | 2026-03-07 00:47:25 | INFO  | Task 
32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:47:25.819663 | orchestrator | 2026-03-07 00:47:25 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:47:25.823702 | orchestrator | 2026-03-07 00:47:25 | INFO  | Task 0819e3b2-3383-40b8-a688-e6619e4ffa11 is in state STARTED 2026-03-07 00:47:25.823775 | orchestrator | 2026-03-07 00:47:25 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:47:29.055995 | orchestrator | 2026-03-07 00:47:29 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:47:29.060792 | orchestrator | 2026-03-07 00:47:29 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:47:29.065174 | orchestrator | 2026-03-07 00:47:29 | INFO  | Task dd3f373f-2a56-4f50-bfe7-de239a7389dd is in state STARTED 2026-03-07 00:47:29.067358 | orchestrator | 2026-03-07 00:47:29 | INFO  | Task b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:47:29.069885 | orchestrator | 2026-03-07 00:47:29 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:47:29.073501 | orchestrator | 2026-03-07 00:47:29 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:47:29.076094 | orchestrator | 2026-03-07 00:47:29 | INFO  | Task 0819e3b2-3383-40b8-a688-e6619e4ffa11 is in state STARTED 2026-03-07 00:47:29.078590 | orchestrator | 2026-03-07 00:47:29 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:47:32.155885 | orchestrator | 2026-03-07 00:47:32 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:47:32.155967 | orchestrator | 2026-03-07 00:47:32 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:47:32.155974 | orchestrator | 2026-03-07 00:47:32 | INFO  | Task dd3f373f-2a56-4f50-bfe7-de239a7389dd is in state STARTED 2026-03-07 00:47:32.155980 | orchestrator | 2026-03-07 00:47:32 | INFO  | Task 
b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:47:32.155985 | orchestrator | 2026-03-07 00:47:32 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:47:32.155990 | orchestrator | 2026-03-07 00:47:32 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:47:32.165111 | orchestrator | 2026-03-07 00:47:32 | INFO  | Task 0819e3b2-3383-40b8-a688-e6619e4ffa11 is in state STARTED 2026-03-07 00:47:32.165235 | orchestrator | 2026-03-07 00:47:32 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:47:35.215017 | orchestrator | 2026-03-07 00:47:35 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:47:35.215970 | orchestrator | 2026-03-07 00:47:35 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:47:35.216749 | orchestrator | 2026-03-07 00:47:35 | INFO  | Task dd3f373f-2a56-4f50-bfe7-de239a7389dd is in state SUCCESS 2026-03-07 00:47:35.217738 | orchestrator | 2026-03-07 00:47:35 | INFO  | Task b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:47:35.218968 | orchestrator | 2026-03-07 00:47:35 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:47:35.219009 | orchestrator | 2026-03-07 00:47:35 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:47:35.220123 | orchestrator | 2026-03-07 00:47:35 | INFO  | Task 0819e3b2-3383-40b8-a688-e6619e4ffa11 is in state STARTED 2026-03-07 00:47:35.220829 | orchestrator | 2026-03-07 00:47:35 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:47:38.343314 | orchestrator | 2026-03-07 00:47:38 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:47:38.343431 | orchestrator | 2026-03-07 00:47:38 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:47:38.343443 | orchestrator | 2026-03-07 00:47:38 | INFO  | Task 
b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:47:38.343451 | orchestrator | 2026-03-07 00:47:38 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:47:38.343457 | orchestrator | 2026-03-07 00:47:38 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:47:38.343463 | orchestrator | 2026-03-07 00:47:38 | INFO  | Task 0819e3b2-3383-40b8-a688-e6619e4ffa11 is in state STARTED 2026-03-07 00:47:38.343470 | orchestrator | 2026-03-07 00:47:38 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:47:41.384145 | orchestrator | 2026-03-07 00:47:41 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:47:41.385421 | orchestrator | 2026-03-07 00:47:41 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:47:41.387537 | orchestrator | 2026-03-07 00:47:41 | INFO  | Task b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:47:41.389118 | orchestrator | 2026-03-07 00:47:41 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:47:41.390010 | orchestrator | 2026-03-07 00:47:41 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:47:41.392027 | orchestrator | 2026-03-07 00:47:41 | INFO  | Task 0819e3b2-3383-40b8-a688-e6619e4ffa11 is in state STARTED 2026-03-07 00:47:41.392062 | orchestrator | 2026-03-07 00:47:41 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:47:44.450406 | orchestrator | 2026-03-07 00:47:44 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:47:44.450524 | orchestrator | 2026-03-07 00:47:44 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:47:44.452571 | orchestrator | 2026-03-07 00:47:44 | INFO  | Task b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state STARTED 2026-03-07 00:47:44.457469 | orchestrator | 2026-03-07 00:47:44 | INFO  | Task 
32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:47:44.457569 | orchestrator | 2026-03-07 00:47:44 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:47:44.459741 | orchestrator | 2026-03-07 00:47:44 | INFO  | Task 0819e3b2-3383-40b8-a688-e6619e4ffa11 is in state STARTED 2026-03-07 00:47:44.459797 | orchestrator | 2026-03-07 00:47:44 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:47:47.535284 | orchestrator | 2026-03-07 00:47:47 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:47:47.535394 | orchestrator | 2026-03-07 00:47:47 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:47:47.536286 | orchestrator | 2026-03-07 00:47:47 | INFO  | Task b47c6774-13c8-4e0c-b83d-d71af9fa89d7 is in state SUCCESS 2026-03-07 00:47:47.538562 | orchestrator | 2026-03-07 00:47:47 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:47:47.541915 | orchestrator | 2026-03-07 00:47:47 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:47:47.544239 | orchestrator | 2026-03-07 00:47:47 | INFO  | Task 0819e3b2-3383-40b8-a688-e6619e4ffa11 is in state STARTED 2026-03-07 00:47:47.544293 | orchestrator | 2026-03-07 00:47:47 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:47:50.629761 | orchestrator | 2026-03-07 00:47:50 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:47:50.634291 | orchestrator | 2026-03-07 00:47:50 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED 2026-03-07 00:47:50.636833 | orchestrator | 2026-03-07 00:47:50 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:47:50.644272 | orchestrator | 2026-03-07 00:47:50 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED 2026-03-07 00:47:50.645617 | orchestrator | 2026-03-07 00:47:50 | INFO  | Task 
0819e3b2-3383-40b8-a688-e6619e4ffa11 is in state STARTED
2026-03-07 00:47:50.645647 | orchestrator | 2026-03-07 00:47:50 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:47:53.820800 | orchestrator | 2026-03-07 00:47:53 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED
2026-03-07 00:47:53.820932 | orchestrator | 2026-03-07 00:47:53 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED
2026-03-07 00:47:53.823245 | orchestrator | 2026-03-07 00:47:53 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:47:53.825297 | orchestrator | 2026-03-07 00:47:53 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED
2026-03-07 00:47:53.827047 | orchestrator | 2026-03-07 00:47:53 | INFO  | Task 0819e3b2-3383-40b8-a688-e6619e4ffa11 is in state STARTED
2026-03-07 00:47:53.827090 | orchestrator | 2026-03-07 00:47:53 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:30.908224 | orchestrator | 2026-03-07 00:48:30 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED
2026-03-07 00:48:30.915557 | orchestrator | 2026-03-07 00:48:30 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED
2026-03-07 00:48:30.918970 | orchestrator | 2026-03-07 00:48:30 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:48:30.926490 | orchestrator | 2026-03-07 00:48:30 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED
2026-03-07 00:48:30.926855 | orchestrator | 2026-03-07 00:48:30 | INFO  | Task 0819e3b2-3383-40b8-a688-e6619e4ffa11 is in state SUCCESS
2026-03-07 00:48:30.926874 | orchestrator | 2026-03-07 00:48:30 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:30.927213 | orchestrator |
2026-03-07 00:48:30.927226 | orchestrator |
2026-03-07 00:48:30.927231 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-07 00:48:30.927255 | orchestrator |
2026-03-07 00:48:30.927261 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-07 00:48:30.927266 | orchestrator | Saturday 07 March 2026 00:46:52 +0000 (0:00:00.754) 0:00:00.754 ********
2026-03-07 00:48:30.927271 | orchestrator | ok: [testbed-manager] => {
2026-03-07 00:48:30.927278 | orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-07 00:48:30.927284 | orchestrator | }
2026-03-07 00:48:30.927289 | orchestrator |
2026-03-07 00:48:30.927294 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-07 00:48:30.927298 | orchestrator | Saturday 07 March 2026 00:46:53 +0000 (0:00:00.505) 0:00:01.260 ********
2026-03-07 00:48:30.927304 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:30.927310 | orchestrator |
2026-03-07 00:48:30.927314 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-07 00:48:30.927319 | orchestrator | Saturday 07 March 2026 00:46:55 +0000 (0:00:02.230) 0:00:03.491 ********
2026-03-07 00:48:30.927324 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-07 00:48:30.927328 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-07 00:48:30.927333 | orchestrator |
2026-03-07 00:48:30.927338 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-07 00:48:30.927342 | orchestrator | Saturday 07 March 2026 00:46:57 +0000 (0:00:01.572) 0:00:05.063 ********
2026-03-07 00:48:30.927347 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:30.927351 | orchestrator |
2026-03-07 00:48:30.927356 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-07 00:48:30.927360 | orchestrator | Saturday 07 March 2026 00:46:59 +0000 (0:00:02.038) 0:00:07.101 ********
2026-03-07 00:48:30.927365 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:30.927369 | orchestrator |
2026-03-07 00:48:30.927374 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-07 00:48:30.927378 | orchestrator | Saturday 07 March 2026 00:47:00 +0000 (0:00:01.223) 0:00:08.325 ********
2026-03-07 00:48:30.927383 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-07 00:48:30.927388 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:30.927395 | orchestrator |
2026-03-07 00:48:30.927402 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-07 00:48:30.927410 | orchestrator | Saturday 07 March 2026 00:47:30 +0000 (0:00:29.858) 0:00:38.183 ********
2026-03-07 00:48:30.927417 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:30.927424 | orchestrator |
2026-03-07 00:48:30.927430 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:48:30.927437 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:30.927445 | orchestrator |
2026-03-07 00:48:30.927452 | orchestrator |
2026-03-07 00:48:30.927459 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:48:30.927466 | orchestrator | Saturday 07 March 2026 00:47:33 +0000 (0:00:03.359) 0:00:41.542 ********
2026-03-07 00:48:30.927488 | orchestrator | ===============================================================================
2026-03-07 00:48:30.927497 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 29.86s
2026-03-07 00:48:30.927503 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.36s
2026-03-07 00:48:30.927507 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.23s
2026-03-07 00:48:30.927512 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.04s
2026-03-07 00:48:30.927516 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.57s
2026-03-07 00:48:30.927521 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.22s
2026-03-07 00:48:30.927525 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.51s
2026-03-07 00:48:30.927536 | orchestrator |
2026-03-07 00:48:30.927540 | orchestrator |
2026-03-07 00:48:30.927545 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-07 00:48:30.927549 | orchestrator |
2026-03-07 00:48:30.927554 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-07 00:48:30.927558 | orchestrator | Saturday 07 March 2026 00:46:53 +0000 (0:00:00.715) 0:00:00.715 ********
2026-03-07 00:48:30.927563 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-07 00:48:30.927569 | orchestrator |
2026-03-07 00:48:30.927573 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-07 00:48:30.927578 | orchestrator | Saturday 07 March 2026 00:46:53 +0000 (0:00:00.755) 0:00:01.471 ********
2026-03-07 00:48:30.927582 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-07 00:48:30.927587 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-07 00:48:30.927591 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-07 00:48:30.927596 | orchestrator |
2026-03-07 00:48:30.927601 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-07 00:48:30.927605 | orchestrator | Saturday 07 March 2026 00:46:55 +0000 (0:00:01.968) 0:00:03.439 ********
2026-03-07 00:48:30.927610 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:30.927614 | orchestrator |
2026-03-07 00:48:30.927619 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-07 00:48:30.927623 | orchestrator | Saturday 07 March 2026 00:46:57 +0000 (0:00:01.928) 0:00:05.368 ********
2026-03-07 00:48:30.927636 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-07 00:48:30.927641 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:30.927645 | orchestrator |
2026-03-07 00:48:30.927650 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-07 00:48:30.927655 | orchestrator | Saturday 07 March 2026 00:47:35 +0000 (0:00:37.319) 0:00:42.687 ********
2026-03-07 00:48:30.927659 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:30.927664 | orchestrator |
2026-03-07 00:48:30.927668 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-07 00:48:30.927673 | orchestrator | Saturday 07 March 2026 00:47:38 +0000 (0:00:03.800) 0:00:46.488 ********
2026-03-07 00:48:30.927677 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:30.927682 | orchestrator |
2026-03-07 00:48:30.927686 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-07 00:48:30.927691 | orchestrator | Saturday 07 March 2026 00:47:39 +0000 (0:00:00.781) 0:00:47.269 ********
2026-03-07 00:48:30.927695 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:30.927700 | orchestrator |
2026-03-07 00:48:30.927704 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-07 00:48:30.927709 | orchestrator | Saturday 07 March 2026 00:47:42 +0000 (0:00:02.587) 0:00:49.857 ********
2026-03-07 00:48:30.927713 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:30.927718 | orchestrator |
2026-03-07 00:48:30.927722 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-07 00:48:30.927727 | orchestrator | Saturday 07 March 2026 00:47:43 +0000 (0:00:01.043) 0:00:50.900 ********
2026-03-07 00:48:30.927731 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:30.927736 | orchestrator |
2026-03-07 00:48:30.927740 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-07 00:48:30.927745 | orchestrator | Saturday 07 March 2026 00:47:44 +0000 (0:00:01.526) 0:00:52.427 ********
2026-03-07 00:48:30.927749 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:30.927754 | orchestrator |
2026-03-07 00:48:30.927759 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:48:30.927763 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:30.927772 | orchestrator |
2026-03-07 00:48:30.927776 | orchestrator |
2026-03-07 00:48:30.927782 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:48:30.927789 | orchestrator | Saturday 07 March 2026 00:47:45 +0000 (0:00:00.939) 0:00:53.367 ********
2026-03-07 00:48:30.927796 | orchestrator | ===============================================================================
2026-03-07 00:48:30.927801 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 37.32s
2026-03-07 00:48:30.927805 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 3.80s
2026-03-07 00:48:30.927809 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.59s
2026-03-07 00:48:30.927814 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.97s
2026-03-07 00:48:30.927819 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.93s
2026-03-07 00:48:30.927839 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.53s
2026-03-07 00:48:30.927848 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.04s
2026-03-07 00:48:30.927852 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.94s
2026-03-07 00:48:30.927857 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.78s
2026-03-07 00:48:30.927861 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.76s
2026-03-07 00:48:30.927866 | orchestrator |
2026-03-07 00:48:30.927871 | orchestrator |
2026-03-07 00:48:30.927875 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-07 00:48:30.927880 | orchestrator |
2026-03-07 00:48:30.927884 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-07 00:48:30.927889 | orchestrator | Saturday 07 March 2026 00:47:16 +0000 (0:00:00.269) 0:00:00.269 ********
2026-03-07 00:48:30.927893 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:30.927898 | orchestrator |
2026-03-07 00:48:30.927902 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-07 00:48:30.927907 | orchestrator | Saturday 07 March 2026 00:47:17 +0000 (0:00:01.111) 0:00:01.380 ********
2026-03-07 00:48:30.927911 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-07 00:48:30.927916 | orchestrator |
2026-03-07 00:48:30.927921 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-07 00:48:30.927925 | orchestrator | Saturday 07 March 2026 00:47:18 +0000 (0:00:00.746) 0:00:02.127 ********
2026-03-07 00:48:30.927930 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:30.927934 | orchestrator |
2026-03-07 00:48:30.927939 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-07 00:48:30.927943 | orchestrator | Saturday 07 March 2026 00:47:19 +0000 (0:00:01.381) 0:00:03.509 ********
2026-03-07 00:48:30.927948 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-07 00:48:30.927952 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:30.927957 | orchestrator |
2026-03-07 00:48:30.927961 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-07 00:48:30.927966 | orchestrator | Saturday 07 March 2026 00:48:22 +0000 (0:01:02.748) 0:01:06.257 ********
2026-03-07 00:48:30.927970 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:30.927975 | orchestrator |
2026-03-07 00:48:30.927979 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:48:30.927984 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:30.927988 | orchestrator |
2026-03-07 00:48:30.927993 | orchestrator |
2026-03-07 00:48:30.927998 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:48:30.928005 | orchestrator | Saturday 07 March 2026 00:48:26 +0000 (0:00:04.703) 0:01:10.961 ********
2026-03-07 00:48:30.928014 | orchestrator | ===============================================================================
2026-03-07 00:48:30.928019 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 62.75s
2026-03-07 00:48:30.928023 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.70s
2026-03-07 00:48:30.928028 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.38s
2026-03-07 00:48:30.928032 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.11s
2026-03-07 00:48:30.928037 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.75s
2026-03-07 00:48:34.003748 | orchestrator | 2026-03-07 00:48:34 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED
2026-03-07 00:48:34.005644 | orchestrator | 2026-03-07 00:48:34 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED
2026-03-07 00:48:34.006788 | orchestrator | 2026-03-07 00:48:34 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:48:34.008303 | orchestrator | 2026-03-07 00:48:34 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED
2026-03-07 00:48:34.008488 | orchestrator | 2026-03-07 00:48:34 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:37.056958 | orchestrator | 2026-03-07 00:48:37 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED
2026-03-07 00:48:37.057387 | orchestrator | 2026-03-07 00:48:37 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED
2026-03-07 00:48:37.058815 | orchestrator | 2026-03-07 00:48:37 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:48:37.060384 | orchestrator | 2026-03-07 00:48:37 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED
2026-03-07 00:48:37.060470 | orchestrator | 2026-03-07 00:48:37 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:40.114469 | orchestrator | 2026-03-07 00:48:40 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED
2026-03-07 00:48:40.118287 | orchestrator | 2026-03-07 00:48:40 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state STARTED
2026-03-07 00:48:40.120839 | orchestrator | 2026-03-07 00:48:40 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:48:40.122704 | orchestrator | 2026-03-07 00:48:40 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED
2026-03-07 00:48:40.122765 | orchestrator | 2026-03-07 00:48:40 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:43.186960 | orchestrator | 2026-03-07 00:48:43 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED
2026-03-07 00:48:43.187707 | orchestrator | 2026-03-07 00:48:43 | INFO  | Task efe50778-ce78-4d87-b6f1-cf799ecd61c0 is in state SUCCESS
2026-03-07 00:48:43.188504 | orchestrator |
2026-03-07 00:48:43.188521 | orchestrator |
2026-03-07 00:48:43.188525 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 00:48:43.188530 | orchestrator |
2026-03-07 00:48:43.188535 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-07 00:48:43.188540 | orchestrator | Saturday 07 March 2026 00:46:54 +0000 (0:00:00.227) 0:00:00.227 ********
2026-03-07 00:48:43.188546 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-07 00:48:43.188551 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-07 00:48:43.188556 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-07 00:48:43.188561 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-07 00:48:43.188566 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-07 00:48:43.188570 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-07 00:48:43.188586 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-07 00:48:43.188591 | orchestrator |
2026-03-07 00:48:43.188596 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-07 00:48:43.188601 | orchestrator |
2026-03-07 00:48:43.188607 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-07 00:48:43.188612 | orchestrator | Saturday 07 March 2026 00:46:55 +0000 (0:00:01.093) 0:00:01.320 ********
2026-03-07 00:48:43.188624 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:48:43.188633 | orchestrator |
2026-03-07 00:48:43.188638 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-07 00:48:43.188644 | orchestrator | Saturday 07 March 2026 00:46:57 +0000 (0:00:01.781) 0:00:03.101 ********
2026-03-07 00:48:43.188648 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:48:43.188654 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:48:43.188659 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:43.188665 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:48:43.188670 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:48:43.188675 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:48:43.188680 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:48:43.188684 | orchestrator |
2026-03-07 00:48:43.188690 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-07 00:48:43.188695 | orchestrator | Saturday 07 March 2026 00:46:59 +0000 (0:00:01.945) 0:00:05.047 ********
2026-03-07 00:48:43.188700 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:48:43.188705 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:48:43.188710 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:48:43.188716 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:48:43.188720 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:48:43.188726 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:48:43.188731 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:43.188736 | orchestrator |
2026-03-07 00:48:43.188741 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-07 00:48:43.188746 | orchestrator | Saturday 07 March 2026 00:47:03 +0000 (0:00:03.890) 0:00:08.937 ********
2026-03-07 00:48:43.188751 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:48:43.188756 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:48:43.188761 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:48:43.188766 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:48:43.188771 | orchestrator | changed: [testbed-manager] 2026-03-07 00:48:43.188776 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:48:43.188781 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:48:43.188786 | orchestrator | 2026-03-07 00:48:43.188791 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-03-07 00:48:43.188796 | orchestrator | Saturday 07 March 2026 00:47:07 +0000 (0:00:03.781) 0:00:12.721 ******** 2026-03-07 00:48:43.188801 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:48:43.188806 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:48:43.188811 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:48:43.188816 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:48:43.188821 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:48:43.188826 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:48:43.188831 | orchestrator | changed: [testbed-manager] 2026-03-07 00:48:43.188836 | orchestrator | 2026-03-07 00:48:43.188841 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-03-07 00:48:43.188846 | orchestrator | Saturday 07 March 2026 00:47:23 +0000 (0:00:16.066) 0:00:28.789 ******** 2026-03-07 00:48:43.188851 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:48:43.188856 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:48:43.188861 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:48:43.188895 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:48:43.188905 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:48:43.188910 | orchestrator | changed: [testbed-node-2] 2026-03-07 
00:48:43.188916 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:43.188921 | orchestrator |
2026-03-07 00:48:43.188926 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-07 00:48:43.188931 | orchestrator | Saturday 07 March 2026 00:48:06 +0000 (0:00:43.629) 0:01:12.419 ********
2026-03-07 00:48:43.188937 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:48:43.188943 | orchestrator |
2026-03-07 00:48:43.188948 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-07 00:48:43.188953 | orchestrator | Saturday 07 March 2026 00:48:08 +0000 (0:00:01.481) 0:01:13.900 ********
2026-03-07 00:48:43.188961 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-07 00:48:43.188967 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-07 00:48:43.188972 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-07 00:48:43.188977 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-07 00:48:43.188989 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-07 00:48:43.188994 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-07 00:48:43.188999 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-07 00:48:43.189004 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-07 00:48:43.189009 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-07 00:48:43.189014 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-07 00:48:43.189019 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-07 00:48:43.189024 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-07 00:48:43.189029 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-07 00:48:43.189034 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-07 00:48:43.189039 | orchestrator |
2026-03-07 00:48:43.189044 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-07 00:48:43.189050 | orchestrator | Saturday 07 March 2026 00:48:14 +0000 (0:00:05.681) 0:01:19.581 ********
2026-03-07 00:48:43.189055 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:43.189060 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:48:43.189065 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:48:43.189070 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:48:43.189075 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:48:43.189080 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:48:43.189085 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:48:43.189090 | orchestrator |
2026-03-07 00:48:43.189095 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-07 00:48:43.189100 | orchestrator | Saturday 07 March 2026 00:48:15 +0000 (0:00:01.643) 0:01:21.225 ********
2026-03-07 00:48:43.189105 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:48:43.189108 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:43.189111 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:48:43.189114 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:48:43.189117 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:48:43.189120 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:48:43.189123 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:48:43.189126 | orchestrator |
2026-03-07 00:48:43.189129 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-07 00:48:43.189133 | orchestrator | Saturday 07 March 2026 00:48:17 +0000 (0:00:02.101) 0:01:23.326 ********
2026-03-07 00:48:43.189136 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:48:43.189139 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:48:43.189142 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:48:43.189145 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:48:43.189151 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:43.189154 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:48:43.189157 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:48:43.189160 | orchestrator |
2026-03-07 00:48:43.189163 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-07 00:48:43.189167 | orchestrator | Saturday 07 March 2026 00:48:19 +0000 (0:00:02.066) 0:01:25.393 ********
2026-03-07 00:48:43.189170 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:48:43.189173 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:48:43.189176 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:48:43.189179 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:48:43.189182 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:48:43.189185 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:48:43.189188 | orchestrator | ok: [testbed-manager]
2026-03-07 00:48:43.189191 | orchestrator |
2026-03-07 00:48:43.189194 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-07 00:48:43.189197 | orchestrator | Saturday 07 March 2026 00:48:23 +0000 (0:00:03.418) 0:01:28.811 ********
2026-03-07 00:48:43.189200 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-07 00:48:43.189204 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:48:43.189208 | orchestrator |
2026-03-07 00:48:43.189211 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-07 00:48:43.189214 | orchestrator | Saturday 07 March 2026 00:48:25 +0000 (0:00:01.836) 0:01:30.648 ********
2026-03-07 00:48:43.189217 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:43.189220 | orchestrator |
2026-03-07 00:48:43.189223 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-07 00:48:43.189226 | orchestrator | Saturday 07 March 2026 00:48:28 +0000 (0:00:03.009) 0:01:33.658 ********
2026-03-07 00:48:43.189229 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:48:43.189234 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:48:43.189239 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:48:43.189244 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:48:43.189249 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:48:43.189253 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:48:43.189257 | orchestrator | changed: [testbed-manager]
2026-03-07 00:48:43.189260 | orchestrator |
2026-03-07 00:48:43.189263 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 00:48:43.189266 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:43.189270 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:43.189273 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:43.189278 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:43.189283 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:43.189287 | orchestrator | testbed-node-4 : ok=15  changed=7
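The per-host result lines above (`ok: [host]`, `changed: [host]`, optionally with `(item=…)` loop items) follow a regular shape, so a summary like the PLAY RECAP can be reconstructed from the raw console output. A minimal sketch, assuming plain text lines as input (the `tally_results` helper is illustrative, not part of the job tooling):

```python
import re
from collections import Counter

# Matches Ansible task-result lines such as "changed: [testbed-node-0]"
# or "ok: [testbed-manager] => (item=netdata.conf)".
RESULT_RE = re.compile(r"^(ok|changed|skipping|failed): \[([^\]]+)\]")

def tally_results(lines):
    """Count ok/changed/skipping/failed results per host."""
    counts = {}
    for line in lines:
        m = RESULT_RE.match(line.strip())
        if m:
            status, host = m.groups()
            counts.setdefault(host, Counter())[status] += 1
    return counts

sample = [
    "changed: [testbed-node-0] => (item=netdata.conf)",
    "changed: [testbed-node-0] => (item=stream.conf)",
    "ok: [testbed-manager]",
]
print(tally_results(sample))
```

This counts loop items individually, whereas Ansible's own recap counts a looped task once per host, so the numbers are an item-level view rather than a reproduction of the recap.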
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:43.189290 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 00:48:43.189293 | orchestrator |
2026-03-07 00:48:43.189296 | orchestrator |
2026-03-07 00:48:43.189299 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 00:48:43.189304 | orchestrator | Saturday 07 March 2026 00:48:39 +0000 (0:00:11.771) 0:01:45.429 ********
2026-03-07 00:48:43.189308 | orchestrator | ===============================================================================
2026-03-07 00:48:43.189311 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 43.63s
2026-03-07 00:48:43.189314 | orchestrator | osism.services.netdata : Add repository -------------------------------- 16.07s
2026-03-07 00:48:43.189317 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.77s
2026-03-07 00:48:43.189320 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.68s
2026-03-07 00:48:43.189323 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.89s
2026-03-07 00:48:43.189326 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.78s
2026-03-07 00:48:43.189329 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.42s
2026-03-07 00:48:43.189332 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.01s
2026-03-07 00:48:43.189335 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.10s
2026-03-07 00:48:43.189338 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.07s
2026-03-07 00:48:43.189341 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.95s
2026-03-07 00:48:43.189344 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.84s
2026-03-07 00:48:43.189347 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.78s
2026-03-07 00:48:43.189350 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.64s
2026-03-07 00:48:43.189354 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.48s
2026-03-07 00:48:43.189357 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.09s
2026-03-07 00:48:43.191214 | orchestrator | 2026-03-07 00:48:43 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:48:43.196498 | orchestrator | 2026-03-07 00:48:43 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED
2026-03-07 00:48:43.196544 | orchestrator | 2026-03-07 00:48:43 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:46.256110 | orchestrator | 2026-03-07 00:48:46 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED
2026-03-07 00:48:46.256194 | orchestrator | 2026-03-07 00:48:46 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:48:46.258846 | orchestrator | 2026-03-07 00:48:46 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED
2026-03-07 00:48:46.258925 | orchestrator | 2026-03-07 00:48:46 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:49.341322 | orchestrator | 2026-03-07 00:48:49 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED
2026-03-07 00:48:49.346545 | orchestrator | 2026-03-07 00:48:49 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:48:49.349505 | orchestrator | 2026-03-07 00:48:49 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED
2026-03-07 00:48:49.349571 | orchestrator
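The TASKS RECAP timing lines ("task name ----- 12.34s") are straightforward to machine-parse, which is handy when comparing slow tasks across periodic runs. A small sketch, assuming the timing lines as plain text (`parse_timings` is an illustrative name):

```python
import re

# Matches TASKS RECAP timing lines like
# "osism.services.netdata : Install package netdata ------------ 43.63s"
TIMING_RE = re.compile(r"^(.*?)\s*-{2,}\s*([\d.]+)s$")

def parse_timings(lines):
    """Return (task name, seconds) pairs from TASKS RECAP lines."""
    timings = []
    for line in lines:
        m = TIMING_RE.match(line.strip())
        if m:
            timings.append((m.group(1), float(m.group(2))))
    return timings

recap = [
    "osism.services.netdata : Install package netdata ----------------------- 43.63s",
    "osism.services.netdata : Add repository -------------------------------- 16.07s",
]
slowest = max(parse_timings(recap), key=lambda t: t[1])
print(slowest)
```

The `-{2,}` run requirement keeps single hyphens inside task names (e.g. "apt-transport-https") from being mistaken for the dash separator.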
| 2026-03-07 00:48:49 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:52.423096 | orchestrator | 2026-03-07 00:48:52 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED
2026-03-07 00:48:52.425049 | orchestrator | 2026-03-07 00:48:52 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:48:52.426410 | orchestrator | 2026-03-07 00:48:52 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED
2026-03-07 00:48:52.426449 | orchestrator | 2026-03-07 00:48:52 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:55.471984 | orchestrator | 2026-03-07 00:48:55 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED
2026-03-07 00:48:55.473961 | orchestrator | 2026-03-07 00:48:55 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:48:55.474458 | orchestrator | 2026-03-07 00:48:55 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED
2026-03-07 00:48:55.474678 | orchestrator | 2026-03-07 00:48:55 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:48:58.511002 | orchestrator | 2026-03-07 00:48:58 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED
2026-03-07 00:48:58.513207 | orchestrator | 2026-03-07 00:48:58 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:48:58.516315 | orchestrator | 2026-03-07 00:48:58 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED
2026-03-07 00:48:58.516349 | orchestrator | 2026-03-07 00:48:58 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:49:01.574628 | orchestrator | 2026-03-07 00:49:01 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED
2026-03-07 00:49:01.576316 | orchestrator | 2026-03-07 00:49:01 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:49:01.577092 | orchestrator | 2026-03-07 00:49:01 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED
2026-03-07 00:49:01.577125 | orchestrator | 2026-03-07 00:49:01 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:49:04.614753 | orchestrator | 2026-03-07 00:49:04 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED
2026-03-07 00:49:04.614810 | orchestrator | 2026-03-07 00:49:04 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:49:04.617374 | orchestrator | 2026-03-07 00:49:04 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED
2026-03-07 00:49:04.617452 | orchestrator | 2026-03-07 00:49:04 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:49:07.669833 | orchestrator | 2026-03-07 00:49:07 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED
2026-03-07 00:49:07.670393 | orchestrator | 2026-03-07 00:49:07 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:49:07.671845 | orchestrator | 2026-03-07 00:49:07 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED
2026-03-07 00:49:07.672039 | orchestrator | 2026-03-07 00:49:07 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:49:10.710063 | orchestrator | 2026-03-07 00:49:10 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED
2026-03-07 00:49:10.713952 | orchestrator | 2026-03-07 00:49:10 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:49:10.716041 | orchestrator | 2026-03-07 00:49:10 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED
2026-03-07 00:49:10.716107 | orchestrator | 2026-03-07 00:49:10 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:49:13.776211 | orchestrator | 2026-03-07 00:49:13 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED
2026-03-07 00:49:13.778922 | orchestrator | 2026-03-07 00:49:13 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:49:13.782498 | orchestrator | 2026-03-07 00:49:13 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state STARTED
2026-03-07 00:49:13.782575 | orchestrator | 2026-03-07 00:49:13 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:49:16.844308 | orchestrator | 2026-03-07 00:49:16 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED
2026-03-07 00:49:16.844439 | orchestrator | 2026-03-07 00:49:16 | INFO  | Task d9a47cf9-8182-4ed0-b1d3-0bc40536f5f2 is in state STARTED
2026-03-07 00:49:16.845018 | orchestrator | 2026-03-07 00:49:16 | INFO  | Task b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED
2026-03-07 00:49:16.845586 | orchestrator | 2026-03-07 00:49:16 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:49:16.846394 | orchestrator | 2026-03-07 00:49:16 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:49:16.861414 | orchestrator | 2026-03-07 00:49:16 | INFO  | Task 2dec6541-ba43-460e-bb95-aab2e64f6fb9 is in state SUCCESS
2026-03-07 00:49:16.863274 | orchestrator |
2026-03-07 00:49:16.863336 | orchestrator |
2026-03-07 00:49:16.863347 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-07 00:49:16.863355 | orchestrator |
2026-03-07 00:49:16.863362 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-07 00:49:16.863369 | orchestrator | Saturday 07 March 2026 00:46:44 +0000 (0:00:00.261) 0:00:00.261 ********
2026-03-07 00:49:16.863377 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:49:16.863384 | orchestrator |
2026-03-07 00:49:16.863394 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-07 00:49:16.863398 | orchestrator |
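The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines are the output of a wait loop that polls each submitted task until it reaches a terminal state such as SUCCESS. A minimal sketch of that pattern, assuming a `fetch_state(task_id)` lookup (hypothetical; the real OSISM client API may differ):

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, fetch_state, interval=1.0, timeout=3600):
    """Poll task states until every task reaches a terminal state.

    fetch_state(task_id) -> str is a stand-in for the real task-state
    lookup (an assumption, not the actual OSISM client call).
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = fetch_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
        # Drop tasks that reached a terminal state; keep polling the rest.
        pending = {t for t in pending if states[t] not in TERMINAL_STATES}
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

Note that new task IDs appear mid-loop in the log (e.g. d9a47cf9…): the real watcher also picks up tasks queued while earlier ones are still running, which this fixed-set sketch does not model.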
Saturday 07 March 2026 00:46:45 +0000 (0:00:01.293) 0:00:01.554 ******** 2026-03-07 00:49:16.863402 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-07 00:49:16.863406 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-07 00:49:16.863410 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-07 00:49:16.863414 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-07 00:49:16.863418 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-07 00:49:16.863422 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-07 00:49:16.863425 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-07 00:49:16.863429 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-07 00:49:16.863433 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-07 00:49:16.863436 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-07 00:49:16.863442 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-07 00:49:16.863446 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-07 00:49:16.863449 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-07 00:49:16.863453 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-07 00:49:16.863457 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-07 00:49:16.863461 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 
'kolla-toolbox']) 2026-03-07 00:49:16.863465 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-07 00:49:16.863468 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-07 00:49:16.863472 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-07 00:49:16.863476 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-07 00:49:16.863493 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-07 00:49:16.863497 | orchestrator | 2026-03-07 00:49:16.863501 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-07 00:49:16.863505 | orchestrator | Saturday 07 March 2026 00:46:50 +0000 (0:00:04.513) 0:00:06.068 ******** 2026-03-07 00:49:16.863508 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:49:16.863513 | orchestrator | 2026-03-07 00:49:16.863518 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-07 00:49:16.863521 | orchestrator | Saturday 07 March 2026 00:46:51 +0000 (0:00:01.448) 0:00:07.516 ******** 2026-03-07 00:49:16.863529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.863536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.863558 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.863562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.863566 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.863571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.863578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.863582 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.863586 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.863596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.863602 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.863613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.863626 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.863640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.863647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.863654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.863660 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.863675 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.863681 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.864301 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.864315 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.864332 | orchestrator | 2026-03-07 00:49:16.864341 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-07 00:49:16.864348 | orchestrator | Saturday 07 March 2026 00:46:57 +0000 (0:00:05.808) 0:00:13.324 ******** 2026-03-07 00:49:16.864357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.864365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864375 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:49:16.864388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.864397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864405 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.864412 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864416 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.864424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864440 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:49:16.864444 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:49:16.864449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.864454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864465 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:49:16.864469 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:49:16.864472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.864477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864484 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:49:16.864495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.864500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864516 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:49:16.864521 | orchestrator |
2026-03-07 00:49:16.864527 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-03-07 00:49:16.864533 | orchestrator | Saturday 07 March 2026 00:47:00 +0000 (0:00:02.419) 0:00:15.744 ********
2026-03-07 00:49:16.864540 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.864551 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864557 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864563 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:49:16.864570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.864729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.864764 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:49:16.864768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.864780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864788 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:49:16.864798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.864807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864815 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:49:16.864819 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:49:16.864822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.864826 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864834 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:49:16.864838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.864845 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.864858 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:49:16.864862 | orchestrator |
2026-03-07 00:49:16.864867 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-03-07 00:49:16.864871 | orchestrator | Saturday 07 March 2026 00:47:03 +0000 (0:00:03.495) 0:00:19.239 ********
2026-03-07 00:49:16.864875 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:49:16.864879 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:49:16.864882 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:49:16.864886 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:49:16.864890 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:49:16.864894 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:49:16.864897 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:49:16.864902 | orchestrator |
2026-03-07 00:49:16.864908 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-03-07 00:49:16.864914 | orchestrator | Saturday 07 March 2026 00:47:05 +0000 (0:00:02.283) 0:00:21.522 ********
2026-03-07 00:49:16.864920 | orchestrator | skipping: [testbed-manager]
2026-03-07 00:49:16.864929 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:49:16.864936 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:49:16.864941 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:49:16.864947 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:49:16.864953 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:49:16.864959 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:49:16.864964 | orchestrator |
2026-03-07 00:49:16.864969 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-03-07 00:49:16.864994 | orchestrator | Saturday 07 March 2026 00:47:06 +0000 (0:00:00.884) 0:00:22.407 ********
2026-03-07 00:49:16.865001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.865007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.865013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.865025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.865038 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.865050 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.865057 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.865063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.865070 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-07 00:49:16.865076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.865086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.865096 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.865106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.865113 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.865119 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.865125 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.865132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.865143 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.865149 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.865160 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.865169 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:49:16.865175 | orchestrator |
2026-03-07 00:49:16.865182 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-03-07 00:49:16.865188 | orchestrator | Saturday 07 March 2026 00:47:15 +0000 (0:00:08.706) 0:00:31.113 ********
2026-03-07 00:49:16.865195 | orchestrator | [WARNING]: Skipped
2026-03-07 00:49:16.865202 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-03-07 00:49:16.865209 | orchestrator | to this access issue:
2026-03-07 00:49:16.865215 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-03-07 00:49:16.865221 | orchestrator | directory
2026-03-07 00:49:16.865227 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-07 00:49:16.865232 | orchestrator |
2026-03-07 00:49:16.865238 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-03-07 00:49:16.865244 | orchestrator | Saturday 07 March 2026 00:47:17 +0000 (0:00:01.732) 0:00:32.846 ********
2026-03-07 00:49:16.865250 | orchestrator | [WARNING]: Skipped
2026-03-07 00:49:16.865256 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-03-07 00:49:16.865261 | orchestrator | to this access issue:
2026-03-07 00:49:16.865268 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-03-07 00:49:16.865274 | orchestrator | directory
2026-03-07 00:49:16.865382 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-07 00:49:16.865389 | orchestrator |
2026-03-07 00:49:16.865395 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-03-07 00:49:16.865401 | orchestrator | Saturday 07 March 2026 00:47:18 +0000 (0:00:01.387) 0:00:34.233 ********
2026-03-07 00:49:16.865406 | orchestrator | [WARNING]: Skipped
2026-03-07 00:49:16.865413 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-03-07 00:49:16.865417 | orchestrator | to this access issue:
2026-03-07 00:49:16.865421 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-03-07 00:49:16.865431 | orchestrator | directory
2026-03-07 00:49:16.865435 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-07 00:49:16.865438 | orchestrator |
2026-03-07 00:49:16.865442 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-03-07 00:49:16.865446 | orchestrator | Saturday 07 March 2026 00:47:19 +0000 (0:00:00.868) 0:00:35.101 ********
2026-03-07 00:49:16.865450 | orchestrator | [WARNING]: Skipped
2026-03-07 00:49:16.865453 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-03-07 00:49:16.865457 | orchestrator | to this access issue:
2026-03-07 00:49:16.865461 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-03-07 00:49:16.865465 | orchestrator | directory
2026-03-07
00:49:16.865468 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-07 00:49:16.865472 | orchestrator | 2026-03-07 00:49:16.865476 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-07 00:49:16.865480 | orchestrator | Saturday 07 March 2026 00:47:20 +0000 (0:00:00.906) 0:00:36.008 ******** 2026-03-07 00:49:16.865483 | orchestrator | changed: [testbed-manager] 2026-03-07 00:49:16.865487 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:49:16.865491 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:49:16.865495 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:16.865499 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:16.865502 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:49:16.865506 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:49:16.865510 | orchestrator | 2026-03-07 00:49:16.865513 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-07 00:49:16.865517 | orchestrator | Saturday 07 March 2026 00:47:25 +0000 (0:00:05.331) 0:00:41.340 ******** 2026-03-07 00:49:16.865521 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-07 00:49:16.865525 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-07 00:49:16.865529 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-07 00:49:16.865533 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-07 00:49:16.865536 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-07 00:49:16.865540 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-07 
00:49:16.865544 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-07 00:49:16.865547 | orchestrator | 2026-03-07 00:49:16.865551 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-07 00:49:16.865555 | orchestrator | Saturday 07 March 2026 00:47:32 +0000 (0:00:06.698) 0:00:48.039 ******** 2026-03-07 00:49:16.865559 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:49:16.865563 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:16.865566 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:49:16.865570 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:16.865579 | orchestrator | changed: [testbed-manager] 2026-03-07 00:49:16.865583 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:49:16.865586 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:49:16.865590 | orchestrator | 2026-03-07 00:49:16.865594 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-07 00:49:16.865598 | orchestrator | Saturday 07 March 2026 00:47:36 +0000 (0:00:04.067) 0:00:52.106 ******** 2026-03-07 00:49:16.865606 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.865613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:16.865617 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.865621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:16.865626 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.865630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:16.865641 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.865646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:16.865653 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865664 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865668 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.865674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:16.865678 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.865682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:16.865689 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-07 00:49:16.865698 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865702 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.865706 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:49:16.865710 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865714 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865718 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865722 | orchestrator | 2026-03-07 00:49:16.865726 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-07 00:49:16.865730 | orchestrator | Saturday 07 March 2026 00:47:39 +0000 (0:00:02.763) 0:00:54.869 ******** 2026-03-07 00:49:16.865734 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-07 00:49:16.865738 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-07 00:49:16.865741 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-07 00:49:16.865745 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-07 00:49:16.865749 | orchestrator | changed: [testbed-node-3] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-07 00:49:16.865755 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-07 00:49:16.865759 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-07 00:49:16.865763 | orchestrator | 2026-03-07 00:49:16.865768 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-07 00:49:16.865772 | orchestrator | Saturday 07 March 2026 00:47:42 +0000 (0:00:03.189) 0:00:58.059 ******** 2026-03-07 00:49:16.865776 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-07 00:49:16.865780 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-07 00:49:16.865784 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-07 00:49:16.865790 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-07 00:49:16.865793 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-07 00:49:16.865797 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-07 00:49:16.865801 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-07 00:49:16.865805 | orchestrator | 2026-03-07 00:49:16.865808 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-07 00:49:16.865812 | orchestrator | Saturday 07 March 2026 00:47:45 +0000 (0:00:03.222) 0:01:01.282 ******** 2026-03-07 00:49:16.865816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.865820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.865824 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.865828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.865852 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865856 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.865860 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.865864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865872 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-07 00:49:16.865880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865886 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865897 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865901 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865909 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865917 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865921 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:49:16.865925 | orchestrator | 2026-03-07 00:49:16.865928 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-07 00:49:16.865932 | orchestrator | Saturday 07 March 
2026 00:47:50 +0000 (0:00:04.824) 0:01:06.106 ******** 2026-03-07 00:49:16.865938 | orchestrator | changed: [testbed-manager] 2026-03-07 00:49:16.865942 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:49:16.865946 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:16.865950 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:16.865954 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:49:16.865957 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:49:16.865961 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:49:16.865965 | orchestrator | 2026-03-07 00:49:16.865969 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-07 00:49:16.865972 | orchestrator | Saturday 07 March 2026 00:47:52 +0000 (0:00:01.926) 0:01:08.033 ******** 2026-03-07 00:49:16.865999 | orchestrator | changed: [testbed-manager] 2026-03-07 00:49:16.866003 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:49:16.866006 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:16.866011 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:16.866070 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:49:16.866075 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:49:16.866079 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:49:16.866084 | orchestrator | 2026-03-07 00:49:16.866088 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-07 00:49:16.866092 | orchestrator | Saturday 07 March 2026 00:47:53 +0000 (0:00:01.444) 0:01:09.478 ******** 2026-03-07 00:49:16.866097 | orchestrator | 2026-03-07 00:49:16.866102 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-07 00:49:16.866106 | orchestrator | Saturday 07 March 2026 00:47:53 +0000 (0:00:00.080) 0:01:09.558 ******** 2026-03-07 00:49:16.866110 | orchestrator | 2026-03-07 00:49:16.866115 | orchestrator | TASK [common : 
Flush handlers] ************************************************* 2026-03-07 00:49:16.866119 | orchestrator | Saturday 07 March 2026 00:47:53 +0000 (0:00:00.080) 0:01:09.638 ******** 2026-03-07 00:49:16.866123 | orchestrator | 2026-03-07 00:49:16.866127 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-07 00:49:16.866132 | orchestrator | Saturday 07 March 2026 00:47:54 +0000 (0:00:00.416) 0:01:10.055 ******** 2026-03-07 00:49:16.866136 | orchestrator | 2026-03-07 00:49:16.866140 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-07 00:49:16.866145 | orchestrator | Saturday 07 March 2026 00:47:54 +0000 (0:00:00.113) 0:01:10.168 ******** 2026-03-07 00:49:16.866149 | orchestrator | 2026-03-07 00:49:16.866154 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-07 00:49:16.866158 | orchestrator | Saturday 07 March 2026 00:47:54 +0000 (0:00:00.069) 0:01:10.238 ******** 2026-03-07 00:49:16.866167 | orchestrator | 2026-03-07 00:49:16.866172 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-07 00:49:16.866176 | orchestrator | Saturday 07 March 2026 00:47:54 +0000 (0:00:00.074) 0:01:10.312 ******** 2026-03-07 00:49:16.866180 | orchestrator | 2026-03-07 00:49:16.866183 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-07 00:49:16.866187 | orchestrator | Saturday 07 March 2026 00:47:54 +0000 (0:00:00.098) 0:01:10.411 ******** 2026-03-07 00:49:16.866191 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:49:16.866195 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:49:16.866198 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:49:16.866202 | orchestrator | changed: [testbed-manager] 2026-03-07 00:49:16.866206 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:16.866210 | 
orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:16.866213 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:49:16.866217 | orchestrator | 2026-03-07 00:49:16.866221 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-07 00:49:16.866225 | orchestrator | Saturday 07 March 2026 00:48:28 +0000 (0:00:34.217) 0:01:44.629 ******** 2026-03-07 00:49:16.866229 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:49:16.866232 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:49:16.866236 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:16.866240 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:49:16.866243 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:49:16.866247 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:16.866251 | orchestrator | changed: [testbed-manager] 2026-03-07 00:49:16.866255 | orchestrator | 2026-03-07 00:49:16.866258 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-07 00:49:16.866262 | orchestrator | Saturday 07 March 2026 00:49:00 +0000 (0:00:31.502) 0:02:16.131 ******** 2026-03-07 00:49:16.866266 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:49:16.866270 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:49:16.866273 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:49:16.866277 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:49:16.866281 | orchestrator | ok: [testbed-manager] 2026-03-07 00:49:16.866284 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:49:16.866288 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:49:16.866292 | orchestrator | 2026-03-07 00:49:16.866296 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-07 00:49:16.866300 | orchestrator | Saturday 07 March 2026 00:49:03 +0000 (0:00:03.168) 0:02:19.301 ******** 2026-03-07 00:49:16.866303 | orchestrator | changed: [testbed-node-0] 2026-03-07 
00:49:16.866307 | orchestrator | changed: [testbed-manager] 2026-03-07 00:49:16.866311 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:49:16.866315 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:49:16.866318 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:49:16.866322 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:16.866326 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:16.866330 | orchestrator | 2026-03-07 00:49:16.866333 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:49:16.866338 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-07 00:49:16.866343 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-07 00:49:16.866347 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-07 00:49:16.866357 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-07 00:49:16.866382 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-07 00:49:16.866404 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-07 00:49:16.866421 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-07 00:49:16.866428 | orchestrator | 2026-03-07 00:49:16.866434 | orchestrator | 2026-03-07 00:49:16.866439 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:49:16.866445 | orchestrator | Saturday 07 March 2026 00:49:14 +0000 (0:00:10.682) 0:02:29.983 ******** 2026-03-07 00:49:16.866451 | orchestrator | =============================================================================== 2026-03-07 00:49:16.866457 
| orchestrator | common : Restart fluentd container ------------------------------------- 34.22s 2026-03-07 00:49:16.866464 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 31.50s 2026-03-07 00:49:16.866468 | orchestrator | common : Restart cron container ---------------------------------------- 10.68s 2026-03-07 00:49:16.866471 | orchestrator | common : Copying over config.json files for services -------------------- 8.71s 2026-03-07 00:49:16.866475 | orchestrator | common : Copying over cron logrotate config file ------------------------ 6.70s 2026-03-07 00:49:16.866479 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.81s 2026-03-07 00:49:16.866483 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.33s 2026-03-07 00:49:16.866486 | orchestrator | common : Check common containers ---------------------------------------- 4.82s 2026-03-07 00:49:16.866490 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.51s 2026-03-07 00:49:16.866494 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.07s 2026-03-07 00:49:16.866498 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.50s 2026-03-07 00:49:16.866501 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.22s 2026-03-07 00:49:16.866505 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.19s 2026-03-07 00:49:16.866509 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.17s 2026-03-07 00:49:16.866513 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.76s 2026-03-07 00:49:16.866517 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.42s 2026-03-07 00:49:16.866521 | 
orchestrator | common : Copying over /run subdirectories conf -------------------------- 2.28s 2026-03-07 00:49:16.866525 | orchestrator | common : Creating log volume -------------------------------------------- 1.93s 2026-03-07 00:49:16.866528 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.73s 2026-03-07 00:49:16.866532 | orchestrator | common : include_tasks -------------------------------------------------- 1.45s 2026-03-07 00:49:16.866536 | orchestrator | 2026-03-07 00:49:16 | INFO  | Task 18c0af18-50b1-4552-9559-43d06a4c0b98 is in state STARTED 2026-03-07 00:49:16.866541 | orchestrator | 2026-03-07 00:49:16 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:49:19.917959 | orchestrator | 2026-03-07 00:49:19 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:49:19.921523 | orchestrator | 2026-03-07 00:49:19 | INFO  | Task d9a47cf9-8182-4ed0-b1d3-0bc40536f5f2 is in state STARTED 2026-03-07 00:49:19.922225 | orchestrator | 2026-03-07 00:49:19 | INFO  | Task b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED 2026-03-07 00:49:19.923056 | orchestrator | 2026-03-07 00:49:19 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:49:19.924149 | orchestrator | 2026-03-07 00:49:19 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:49:19.925063 | orchestrator | 2026-03-07 00:49:19 | INFO  | Task 18c0af18-50b1-4552-9559-43d06a4c0b98 is in state STARTED 2026-03-07 00:49:19.925118 | orchestrator | 2026-03-07 00:49:19 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:49:35.153171 | orchestrator | 2026-03-07 00:49:35 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:49:35.153259 | orchestrator | 2026-03-07 00:49:35 | INFO  | Task d9a47cf9-8182-4ed0-b1d3-0bc40536f5f2 is in state STARTED 2026-03-07 00:49:35.155980 | orchestrator | 2026-03-07 00:49:35 | INFO  | Task b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED 2026-03-07 00:49:35.157623 | orchestrator | 2026-03-07 00:49:35 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:49:35.158472 | orchestrator | 2026-03-07 00:49:35 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:49:35.160152 | orchestrator | 2026-03-07 00:49:35 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:49:35.162531 | orchestrator | 2026-03-07 00:49:35 | INFO  | Task 18c0af18-50b1-4552-9559-43d06a4c0b98 is in state SUCCESS 2026-03-07 00:49:35.162619 | orchestrator | 2026-03-07 00:49:35 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:49:47.494938 | orchestrator | 2026-03-07 00:49:47 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:49:47.503302 | orchestrator | 2026-03-07 00:49:47.503391 | orchestrator | 2026-03-07 00:49:47.503406 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 00:49:47.503419 | orchestrator | 2026-03-07 00:49:47.503430 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 00:49:47.503441 | orchestrator | Saturday 07 March 2026 00:49:22 +0000 (0:00:00.500) 0:00:00.500 ******** 2026-03-07 00:49:47.503452 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:49:47.503464 | orchestrator | ok: [testbed-node-1] 
2026-03-07 00:49:47.503475 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:49:47.503486 | orchestrator | 2026-03-07 00:49:47.503497 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 00:49:47.503508 | orchestrator | Saturday 07 March 2026 00:49:23 +0000 (0:00:00.511) 0:00:01.012 ******** 2026-03-07 00:49:47.503519 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-07 00:49:47.503530 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-07 00:49:47.503541 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-07 00:49:47.503551 | orchestrator | 2026-03-07 00:49:47.503562 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-07 00:49:47.503573 | orchestrator | 2026-03-07 00:49:47.503583 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-07 00:49:47.503594 | orchestrator | Saturday 07 March 2026 00:49:23 +0000 (0:00:00.565) 0:00:01.578 ******** 2026-03-07 00:49:47.503605 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:49:47.503616 | orchestrator | 2026-03-07 00:49:47.503627 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-07 00:49:47.503638 | orchestrator | Saturday 07 March 2026 00:49:24 +0000 (0:00:00.817) 0:00:02.395 ******** 2026-03-07 00:49:47.503648 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-07 00:49:47.503659 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-07 00:49:47.503670 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-07 00:49:47.503681 | orchestrator | 2026-03-07 00:49:47.503692 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-07 00:49:47.503704 | 
orchestrator | Saturday 07 March 2026 00:49:25 +0000 (0:00:00.911) 0:00:03.306 ******** 2026-03-07 00:49:47.503715 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-07 00:49:47.503727 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-07 00:49:47.503739 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-07 00:49:47.503750 | orchestrator | 2026-03-07 00:49:47.503762 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-03-07 00:49:47.503773 | orchestrator | Saturday 07 March 2026 00:49:27 +0000 (0:00:02.248) 0:00:05.554 ******** 2026-03-07 00:49:47.503785 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:49:47.503796 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:47.503807 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:47.503819 | orchestrator | 2026-03-07 00:49:47.503830 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-07 00:49:47.503842 | orchestrator | Saturday 07 March 2026 00:49:29 +0000 (0:00:02.096) 0:00:07.650 ******** 2026-03-07 00:49:47.503853 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:49:47.503865 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:47.503876 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:47.503887 | orchestrator | 2026-03-07 00:49:47.503920 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:49:47.503933 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:49:47.503946 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:49:47.503957 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:49:47.503969 | orchestrator | 2026-03-07 00:49:47.503980 | orchestrator | 
2026-03-07 00:49:47.503992 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:49:47.504003 | orchestrator | Saturday 07 March 2026 00:49:32 +0000 (0:00:03.249) 0:00:10.900 ******** 2026-03-07 00:49:47.504014 | orchestrator | =============================================================================== 2026-03-07 00:49:47.504026 | orchestrator | memcached : Restart memcached container --------------------------------- 3.25s 2026-03-07 00:49:47.504037 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.25s 2026-03-07 00:49:47.504049 | orchestrator | memcached : Check memcached container ----------------------------------- 2.10s 2026-03-07 00:49:47.504060 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.91s 2026-03-07 00:49:47.504093 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.82s 2026-03-07 00:49:47.504105 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2026-03-07 00:49:47.504117 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.51s 2026-03-07 00:49:47.504128 | orchestrator | 2026-03-07 00:49:47.504138 | orchestrator | 2026-03-07 00:49:47.504149 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 00:49:47.504160 | orchestrator | 2026-03-07 00:49:47.504170 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 00:49:47.504181 | orchestrator | Saturday 07 March 2026 00:49:22 +0000 (0:00:00.244) 0:00:00.244 ******** 2026-03-07 00:49:47.504191 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:49:47.504202 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:49:47.504213 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:49:47.504224 | orchestrator | 2026-03-07 00:49:47.504234 
| orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 00:49:47.504264 | orchestrator | Saturday 07 March 2026 00:49:22 +0000 (0:00:00.366) 0:00:00.611 ******** 2026-03-07 00:49:47.504277 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-07 00:49:47.504288 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-07 00:49:47.504299 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-07 00:49:47.504311 | orchestrator | 2026-03-07 00:49:47.504322 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-07 00:49:47.504333 | orchestrator | 2026-03-07 00:49:47.504344 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-07 00:49:47.504367 | orchestrator | Saturday 07 March 2026 00:49:23 +0000 (0:00:00.741) 0:00:01.353 ******** 2026-03-07 00:49:47.504379 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:49:47.504390 | orchestrator | 2026-03-07 00:49:47.504418 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-07 00:49:47.504430 | orchestrator | Saturday 07 March 2026 00:49:23 +0000 (0:00:00.572) 0:00:01.925 ******** 2026-03-07 00:49:47.504444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-07 
00:49:47.504469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504512 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504549 | orchestrator | 2026-03-07 00:49:47.504561 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-07 00:49:47.504573 | orchestrator | Saturday 07 March 2026 00:49:25 +0000 (0:00:01.534) 0:00:03.459 ******** 2026-03-07 00:49:47.504585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504686 | orchestrator | 2026-03-07 00:49:47.504697 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 
2026-03-07 00:49:47.504709 | orchestrator | Saturday 07 March 2026 00:49:28 +0000 (0:00:03.137) 0:00:06.597 ******** 2026-03-07 00:49:47.504721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504767 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504809 | orchestrator | 2026-03-07 00:49:47.504824 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-07 00:49:47.504844 | orchestrator | Saturday 07 March 2026 00:49:31 +0000 (0:00:03.246) 0:00:09.845 ******** 2026-03-07 00:49:47.504884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.504993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.505025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-07 00:49:47.505045 | orchestrator | 2026-03-07 00:49:47.505136 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-07 00:49:47.505152 | orchestrator | Saturday 07 March 2026 00:49:34 +0000 (0:00:02.337) 0:00:12.183 ******** 2026-03-07 00:49:47.505163 | orchestrator | 2026-03-07 00:49:47.505175 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-07 00:49:47.505186 | orchestrator | Saturday 07 March 2026 00:49:34 +0000 (0:00:00.182) 0:00:12.365 ******** 2026-03-07 00:49:47.505196 | orchestrator | 2026-03-07 00:49:47.505207 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-07 00:49:47.505218 | orchestrator | Saturday 07 March 2026 00:49:34 +0000 (0:00:00.162) 0:00:12.528 ******** 2026-03-07 00:49:47.505229 | orchestrator | 2026-03-07 00:49:47.505240 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-07 00:49:47.505251 | orchestrator | Saturday 07 March 2026 00:49:34 +0000 (0:00:00.337) 0:00:12.865 ******** 2026-03-07 00:49:47.505262 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:49:47.505273 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:47.505283 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:47.505294 | orchestrator | 2026-03-07 00:49:47.505305 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel 
container] ********************* 2026-03-07 00:49:47.505316 | orchestrator | Saturday 07 March 2026 00:49:39 +0000 (0:00:05.126) 0:00:17.992 ******** 2026-03-07 00:49:47.505327 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:49:47.505338 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:49:47.505349 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:49:47.505366 | orchestrator | 2026-03-07 00:49:47.505384 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:49:47.505404 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:49:47.505422 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:49:47.505440 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:49:47.505459 | orchestrator | 2026-03-07 00:49:47.505477 | orchestrator | 2026-03-07 00:49:47.505497 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:49:47.505516 | orchestrator | Saturday 07 March 2026 00:49:44 +0000 (0:00:04.442) 0:00:22.434 ******** 2026-03-07 00:49:47.505534 | orchestrator | =============================================================================== 2026-03-07 00:49:47.505552 | orchestrator | redis : Restart redis container ----------------------------------------- 5.13s 2026-03-07 00:49:47.505570 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.44s 2026-03-07 00:49:47.505590 | orchestrator | redis : Copying over redis config files --------------------------------- 3.25s 2026-03-07 00:49:47.505610 | orchestrator | redis : Copying over default config.json files -------------------------- 3.14s 2026-03-07 00:49:47.505628 | orchestrator | redis : Check redis containers ------------------------------------------ 2.34s 
2026-03-07 00:49:47.505646 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.53s 2026-03-07 00:49:47.505673 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s 2026-03-07 00:49:47.505694 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.68s 2026-03-07 00:49:47.505714 | orchestrator | redis : include_tasks --------------------------------------------------- 0.57s 2026-03-07 00:49:47.505732 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2026-03-07 00:49:47.505751 | orchestrator | 2026-03-07 00:49:47 | INFO  | Task d9a47cf9-8182-4ed0-b1d3-0bc40536f5f2 is in state SUCCESS 2026-03-07 00:49:47.524781 | orchestrator | 2026-03-07 00:49:47 | INFO  | Task b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED 2026-03-07 00:49:47.530806 | orchestrator | 2026-03-07 00:49:47 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:49:47.533684 | orchestrator | 2026-03-07 00:49:47 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:49:47.534500 | orchestrator | 2026-03-07 00:49:47 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:49:47.534529 | orchestrator | 2026-03-07 00:49:47 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:49:50.639916 | orchestrator | 2026-03-07 00:49:50 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:49:50.640568 | orchestrator | 2026-03-07 00:49:50 | INFO  | Task b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED 2026-03-07 00:49:50.641481 | orchestrator | 2026-03-07 00:49:50 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:49:50.645238 | orchestrator | 2026-03-07 00:49:50 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:49:50.646259 | orchestrator | 
2026-03-07 00:49:50 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:49:50.646307 | orchestrator | 2026-03-07 00:49:50 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:49:53.702201 | orchestrator | 2026-03-07 00:49:53 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:49:53.703949 | orchestrator | 2026-03-07 00:49:53 | INFO  | Task b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED 2026-03-07 00:49:53.704899 | orchestrator | 2026-03-07 00:49:53 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:49:53.705921 | orchestrator | 2026-03-07 00:49:53 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:49:53.708418 | orchestrator | 2026-03-07 00:49:53 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:49:53.708473 | orchestrator | 2026-03-07 00:49:53 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:49:56.766269 | orchestrator | 2026-03-07 00:49:56 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:49:56.766488 | orchestrator | 2026-03-07 00:49:56 | INFO  | Task b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED 2026-03-07 00:49:56.766518 | orchestrator | 2026-03-07 00:49:56 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:49:56.766540 | orchestrator | 2026-03-07 00:49:56 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:49:56.766560 | orchestrator | 2026-03-07 00:49:56 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:49:56.766580 | orchestrator | 2026-03-07 00:49:56 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:49:59.805794 | orchestrator | 2026-03-07 00:49:59 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:49:59.806311 | orchestrator | 2026-03-07 00:49:59 | INFO  | 
Task b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED 2026-03-07 00:49:59.807416 | orchestrator | 2026-03-07 00:49:59 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:49:59.807955 | orchestrator | 2026-03-07 00:49:59 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:49:59.808816 | orchestrator | 2026-03-07 00:49:59 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:49:59.808841 | orchestrator | 2026-03-07 00:49:59 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:50:02.893076 | orchestrator | 2026-03-07 00:50:02 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:50:02.893530 | orchestrator | 2026-03-07 00:50:02 | INFO  | Task b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED 2026-03-07 00:50:02.895588 | orchestrator | 2026-03-07 00:50:02 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:50:02.900608 | orchestrator | 2026-03-07 00:50:02 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:50:02.900702 | orchestrator | 2026-03-07 00:50:02 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:50:02.900720 | orchestrator | 2026-03-07 00:50:02 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:50:05.970623 | orchestrator | 2026-03-07 00:50:05 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:50:05.977441 | orchestrator | 2026-03-07 00:50:05 | INFO  | Task b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED 2026-03-07 00:50:05.981621 | orchestrator | 2026-03-07 00:50:05 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:50:05.982077 | orchestrator | 2026-03-07 00:50:05 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:50:05.983087 | orchestrator | 2026-03-07 00:50:05 | INFO  | Task 
32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:50:05.983113 | orchestrator | 2026-03-07 00:50:05 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:50:09.076780 | orchestrator | 2026-03-07 00:50:09 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:50:09.077078 | orchestrator | 2026-03-07 00:50:09 | INFO  | Task b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED 2026-03-07 00:50:09.085276 | orchestrator | 2026-03-07 00:50:09 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:50:09.085868 | orchestrator | 2026-03-07 00:50:09 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:50:09.087313 | orchestrator | 2026-03-07 00:50:09 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:50:09.087365 | orchestrator | 2026-03-07 00:50:09 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:50:12.157703 | orchestrator | 2026-03-07 00:50:12 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:50:12.158566 | orchestrator | 2026-03-07 00:50:12 | INFO  | Task b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED 2026-03-07 00:50:12.159379 | orchestrator | 2026-03-07 00:50:12 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:50:12.160271 | orchestrator | 2026-03-07 00:50:12 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:50:12.162722 | orchestrator | 2026-03-07 00:50:12 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:50:12.162791 | orchestrator | 2026-03-07 00:50:12 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:50:15.278567 | orchestrator | 2026-03-07 00:50:15 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:50:15.278660 | orchestrator | 2026-03-07 00:50:15 | INFO  | Task 
b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED 2026-03-07 00:50:15.278671 | orchestrator | 2026-03-07 00:50:15 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:50:15.278679 | orchestrator | 2026-03-07 00:50:15 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:50:15.278687 | orchestrator | 2026-03-07 00:50:15 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:50:15.278722 | orchestrator | 2026-03-07 00:50:15 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:50:18.353062 | orchestrator | 2026-03-07 00:50:18 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:50:18.355913 | orchestrator | 2026-03-07 00:50:18 | INFO  | Task b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED 2026-03-07 00:50:18.358582 | orchestrator | 2026-03-07 00:50:18 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:50:18.360732 | orchestrator | 2026-03-07 00:50:18 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:50:18.362072 | orchestrator | 2026-03-07 00:50:18 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:50:18.362118 | orchestrator | 2026-03-07 00:50:18 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:50:21.505919 | orchestrator | 2026-03-07 00:50:21 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:50:21.506630 | orchestrator | 2026-03-07 00:50:21 | INFO  | Task b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED 2026-03-07 00:50:21.507771 | orchestrator | 2026-03-07 00:50:21 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:50:21.508679 | orchestrator | 2026-03-07 00:50:21 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:50:21.511459 | orchestrator | 2026-03-07 00:50:21 | INFO  | Task 
32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:50:21.512744 | orchestrator | 2026-03-07 00:50:21 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:50:24.564651 | orchestrator | 2026-03-07 00:50:24 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:50:24.565947 | orchestrator | 2026-03-07 00:50:24 | INFO  | Task b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED 2026-03-07 00:50:24.566000 | orchestrator | 2026-03-07 00:50:24 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:50:24.567399 | orchestrator | 2026-03-07 00:50:24 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:50:24.568740 | orchestrator | 2026-03-07 00:50:24 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:50:24.568785 | orchestrator | 2026-03-07 00:50:24 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:50:27.644274 | orchestrator | 2026-03-07 00:50:27 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:50:27.646157 | orchestrator | 2026-03-07 00:50:27 | INFO  | Task b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED 2026-03-07 00:50:27.648642 | orchestrator | 2026-03-07 00:50:27 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:50:27.649616 | orchestrator | 2026-03-07 00:50:27 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:50:27.650641 | orchestrator | 2026-03-07 00:50:27 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:50:27.650970 | orchestrator | 2026-03-07 00:50:27 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:50:30.691765 | orchestrator | 2026-03-07 00:50:30 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:50:30.691861 | orchestrator | 2026-03-07 00:50:30 | INFO  | Task 
b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED 2026-03-07 00:50:30.691871 | orchestrator | 2026-03-07 00:50:30 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:50:30.691906 | orchestrator | 2026-03-07 00:50:30 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:50:30.691914 | orchestrator | 2026-03-07 00:50:30 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:50:30.691922 | orchestrator | 2026-03-07 00:50:30 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:50:33.731735 | orchestrator | 2026-03-07 00:50:33 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:50:33.733624 | orchestrator | 2026-03-07 00:50:33 | INFO  | Task b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED 2026-03-07 00:50:33.735160 | orchestrator | 2026-03-07 00:50:33 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:50:33.738633 | orchestrator | 2026-03-07 00:50:33 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:50:33.741133 | orchestrator | 2026-03-07 00:50:33 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:50:33.741238 | orchestrator | 2026-03-07 00:50:33 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:50:36.782430 | orchestrator | 2026-03-07 00:50:36 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:50:36.782696 | orchestrator | 2026-03-07 00:50:36 | INFO  | Task b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state STARTED 2026-03-07 00:50:36.784816 | orchestrator | 2026-03-07 00:50:36 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:50:36.786222 | orchestrator | 2026-03-07 00:50:36 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:50:36.786847 | orchestrator | 2026-03-07 00:50:36 | INFO  | Task 
32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:50:36.786871 | orchestrator | 2026-03-07 00:50:36 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:50:39.831141 | orchestrator | 2026-03-07 00:50:39 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:50:39.832191 | orchestrator | 2026-03-07 00:50:39 | INFO  | Task dcab45ef-8630-471e-94a6-64ca26753e28 is in state STARTED 2026-03-07 00:50:39.833992 | orchestrator | 2026-03-07 00:50:39 | INFO  | Task b3e81d0a-f288-46d9-88a8-bc7df06a566e is in state SUCCESS 2026-03-07 00:50:39.836111 | orchestrator | 2026-03-07 00:50:39.836150 | orchestrator | 2026-03-07 00:50:39.836158 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 00:50:39.836166 | orchestrator | 2026-03-07 00:50:39.836194 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 00:50:39.836202 | orchestrator | Saturday 07 March 2026 00:49:22 +0000 (0:00:00.350) 0:00:00.350 ******** 2026-03-07 00:50:39.836209 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:50:39.836333 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:50:39.836339 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:50:39.836344 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:50:39.836349 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:50:39.836353 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:50:39.836357 | orchestrator | 2026-03-07 00:50:39.836362 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 00:50:39.836366 | orchestrator | Saturday 07 March 2026 00:49:23 +0000 (0:00:01.175) 0:00:01.525 ******** 2026-03-07 00:50:39.836371 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-07 00:50:39.836375 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 
2026-03-07 00:50:39.836378 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-07 00:50:39.836382 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-07 00:50:39.836404 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-07 00:50:39.836408 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-07 00:50:39.836412 | orchestrator |
2026-03-07 00:50:39.836416 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-03-07 00:50:39.836419 | orchestrator |
2026-03-07 00:50:39.836423 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-03-07 00:50:39.836434 | orchestrator | Saturday 07 March 2026 00:49:24 +0000 (0:00:00.866) 0:00:02.392 ********
2026-03-07 00:50:39.836439 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:50:39.836444 | orchestrator |
2026-03-07 00:50:39.836448 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-07 00:50:39.836452 | orchestrator | Saturday 07 March 2026 00:49:26 +0000 (0:00:01.416) 0:00:03.808 ********
2026-03-07 00:50:39.836456 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-07 00:50:39.836460 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-07 00:50:39.836464 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-07 00:50:39.836468 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-07 00:50:39.836472 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-07 00:50:39.836475 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-07 00:50:39.836479 | orchestrator |
2026-03-07 00:50:39.836483 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-07 00:50:39.836487 | orchestrator | Saturday 07 March 2026 00:49:27 +0000 (0:00:01.323) 0:00:05.131 ********
2026-03-07 00:50:39.836491 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-07 00:50:39.836494 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-07 00:50:39.836498 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-07 00:50:39.836502 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-07 00:50:39.836506 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-07 00:50:39.836509 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-07 00:50:39.836513 | orchestrator |
2026-03-07 00:50:39.836517 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-07 00:50:39.836521 | orchestrator | Saturday 07 March 2026 00:49:29 +0000 (0:00:02.025) 0:00:07.157 ********
2026-03-07 00:50:39.836525 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-03-07 00:50:39.836529 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:50:39.836534 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-03-07 00:50:39.836537 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:50:39.836541 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-03-07 00:50:39.836545 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:50:39.836548 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-03-07 00:50:39.836552 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:50:39.836556 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-03-07 00:50:39.836559 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:50:39.836563 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-03-07 00:50:39.836567 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:50:39.836571 | orchestrator |
2026-03-07 00:50:39.836574 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-03-07 00:50:39.836578 | orchestrator | Saturday 07 March 2026 00:49:31 +0000 (0:00:02.313) 0:00:09.471 ********
2026-03-07 00:50:39.836582 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:50:39.836585 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:50:39.836589 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:50:39.836593 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:50:39.836600 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:50:39.836604 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:50:39.836608 | orchestrator |
2026-03-07 00:50:39.836612 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-03-07 00:50:39.836615 | orchestrator | Saturday 07 March 2026 00:49:33 +0000 (0:00:01.674) 0:00:11.146 ********
2026-03-07 00:50:39.836642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-07 00:50:39.836650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-07 00:50:39.836654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-07 00:50:39.836659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-07 00:50:39.836663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-07 00:50:39.836672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-07 00:50:39.836680 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-07 00:50:39.836684 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-07 00:50:39.836688 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-07 00:50:39.836701 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-07 00:50:39.836705 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-07 00:50:39.836719 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-07 00:50:39.836723 | orchestrator |
2026-03-07 00:50:39.836727 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-03-07 00:50:39.836731 | orchestrator | Saturday 07 March 2026 00:49:37 +0000 (0:00:03.529) 0:00:14.675 ********
2026-03-07 00:50:39.836735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-07 00:50:39.836739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-07 00:50:39.836743 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-07 00:50:39.836747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-07 00:50:39.836756 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-07 00:50:39.836763 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-07 00:50:39.836767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-07 00:50:39.836771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-07 00:50:39.836775 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-07 00:50:39.836782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-07 00:50:39.836791 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-07 00:50:39.836795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-07 00:50:39.836799 | orchestrator |
2026-03-07 00:50:39.836803 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-03-07 00:50:39.836807 | orchestrator | Saturday 07 March 2026 00:49:40 +0000 (0:00:03.563) 0:00:18.239 ********
2026-03-07 00:50:39.836810 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:50:39.836814 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:50:39.836818 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:50:39.836822 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:50:39.836825 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:50:39.836829 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:50:39.836833 | orchestrator |
2026-03-07 00:50:39.836837 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-03-07 00:50:39.836841 | orchestrator | Saturday 07 March 2026 00:49:41 +0000 (0:00:01.261) 0:00:19.500 ********
2026-03-07 00:50:39.836845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-07 00:50:39.836849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-07 00:50:39.836857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-07 00:50:39.836866 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-07 00:50:39.836870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-07 00:50:39.836874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-07 00:50:39.836878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-07 00:50:39.836886 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-07 00:50:39.836890 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-07 00:50:39.836901 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-07 00:50:39.836905 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-07 00:50:39.836909 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-07 00:50:39.836913 | orchestrator |
2026-03-07 00:50:39.836917 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-07 00:50:39.836921 | orchestrator | Saturday 07 March 2026 00:49:44 +0000 (0:00:03.018) 0:00:22.518 ********
2026-03-07 00:50:39.836928 | orchestrator |
2026-03-07 00:50:39.836932 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-07 00:50:39.836936 | orchestrator | Saturday 07 March 2026 00:49:45 +0000 (0:00:00.173) 0:00:22.692 ********
2026-03-07 00:50:39.836939 | orchestrator |
2026-03-07 00:50:39.836943 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-07 00:50:39.836947 | orchestrator | Saturday 07 March 2026 00:49:45 +0000 (0:00:00.147) 0:00:22.840 ********
2026-03-07 00:50:39.836951 | orchestrator |
2026-03-07 00:50:39.836955 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-07 00:50:39.836958 | orchestrator | Saturday 07 March 2026 00:49:45 +0000 (0:00:00.245) 0:00:23.085 ********
2026-03-07 00:50:39.836962 | orchestrator |
2026-03-07 00:50:39.836966 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-07 00:50:39.836970 | orchestrator | Saturday 07 March 2026 00:49:45 +0000 (0:00:00.261) 0:00:23.346 ********
2026-03-07 00:50:39.836973 | orchestrator |
2026-03-07 00:50:39.836977 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-07 00:50:39.836981 | orchestrator | Saturday 07 March 2026 00:49:45 +0000 (0:00:00.239) 0:00:23.586 ********
2026-03-07 00:50:39.836985 | orchestrator |
2026-03-07 00:50:39.836988 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-03-07 00:50:39.836992 | orchestrator | Saturday 07 March 2026 00:49:46 +0000 (0:00:00.145) 0:00:23.731 ********
2026-03-07 00:50:39.836996 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:50:39.836999 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:50:39.837003 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:50:39.837007 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:50:39.837011 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:50:39.837014 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:50:39.837018 | orchestrator |
2026-03-07 00:50:39.837022 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-03-07 00:50:39.837026 | orchestrator | Saturday 07 March 2026 00:49:53 +0000 (0:00:07.459) 0:00:31.191 ********
2026-03-07 00:50:39.837029 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:50:39.837033 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:50:39.837037 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:50:39.837041 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:50:39.837044 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:50:39.837048 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:50:39.837052 | orchestrator |
2026-03-07 00:50:39.837056 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-07 00:50:39.837059 | orchestrator | Saturday 07 March 2026 00:49:55 +0000 (0:00:01.935) 0:00:33.127 ********
2026-03-07 00:50:39.837063 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:50:39.837067 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:50:39.837071 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:50:39.837074 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:50:39.837078 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:50:39.837082 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:50:39.837086 | orchestrator |
2026-03-07 00:50:39.837092 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-03-07 00:50:39.837096 | orchestrator | Saturday 07 March 2026 00:50:06 +0000 (0:00:10.961) 0:00:44.089 ********
2026-03-07 00:50:39.837164 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-03-07 00:50:39.837169 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-03-07 00:50:39.837173 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-03-07 00:50:39.837177 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-03-07 00:50:39.837181 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-03-07 00:50:39.837191 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-03-07 00:50:39.837195 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-03-07 00:50:39.837199 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-03-07 00:50:39.837202 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-03-07 00:50:39.837206 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-03-07 00:50:39.837227 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-03-07 00:50:39.837233 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-03-07 00:50:39.837239 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-07 00:50:39.837245 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-07 00:50:39.837251 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-07 00:50:39.837256 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-07 00:50:39.837266 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-07 00:50:39.837274 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-07 00:50:39.837279 | orchestrator |
2026-03-07 00:50:39.837285 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-03-07 00:50:39.837291 | orchestrator | Saturday 07 March 2026 00:50:16 +0000 (0:00:10.094) 0:00:54.183 ********
2026-03-07 00:50:39.837297 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-03-07 00:50:39.837303 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-03-07 00:50:39.837308 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-03-07 00:50:39.837314 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:50:39.837320 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:50:39.837326 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-03-07 00:50:39.837332 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-03-07 00:50:39.837338 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:50:39.837341 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-03-07 00:50:39.837345 | orchestrator |
2026-03-07 00:50:39.837349 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-07 00:50:39.837353 | orchestrator | Saturday 07 March 2026 00:50:21 +0000 (0:00:04.546) 0:00:58.729 ********
2026-03-07 00:50:39.837356 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-07 00:50:39.837360 | orchestrator | skipping: [testbed-node-3]
2026-03-07
00:50:39.837364 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-07 00:50:39.837368 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:50:39.837372 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-07 00:50:39.837375 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:50:39.837379 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-07 00:50:39.837383 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-07 00:50:39.837387 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-07 00:50:39.837390 | orchestrator | 2026-03-07 00:50:39.837394 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-07 00:50:39.837406 | orchestrator | Saturday 07 March 2026 00:50:25 +0000 (0:00:04.824) 0:01:03.554 ******** 2026-03-07 00:50:39.837410 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:50:39.837414 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:50:39.837418 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:50:39.837421 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:50:39.837425 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:50:39.837429 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:50:39.837432 | orchestrator | 2026-03-07 00:50:39.837436 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:50:39.837445 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-07 00:50:39.837454 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-07 00:50:39.837458 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-07 00:50:39.837462 | orchestrator | testbed-node-3 : ok=13  changed=9  
unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-07 00:50:39.837466 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-07 00:50:39.837469 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-07 00:50:39.837473 | orchestrator | 2026-03-07 00:50:39.837477 | orchestrator | 2026-03-07 00:50:39.837481 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:50:39.837484 | orchestrator | Saturday 07 March 2026 00:50:36 +0000 (0:00:10.964) 0:01:14.518 ******** 2026-03-07 00:50:39.837488 | orchestrator | =============================================================================== 2026-03-07 00:50:39.837492 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 21.93s 2026-03-07 00:50:39.837496 | orchestrator | openvswitch : Set system-id, hostname and hw-offload ------------------- 10.09s 2026-03-07 00:50:39.837499 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 7.46s 2026-03-07 00:50:39.837503 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.82s 2026-03-07 00:50:39.837507 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 4.55s 2026-03-07 00:50:39.837511 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.56s 2026-03-07 00:50:39.837514 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.53s 2026-03-07 00:50:39.837518 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.02s 2026-03-07 00:50:39.837522 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.31s 2026-03-07 00:50:39.837525 | orchestrator | module-load : Persist modules via modules-load.d 
------------------------ 2.03s 2026-03-07 00:50:39.837529 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.94s 2026-03-07 00:50:39.837533 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.67s 2026-03-07 00:50:39.837536 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.42s 2026-03-07 00:50:39.837540 | orchestrator | module-load : Load modules ---------------------------------------------- 1.32s 2026-03-07 00:50:39.837544 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.26s 2026-03-07 00:50:39.837547 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.21s 2026-03-07 00:50:39.837551 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.18s 2026-03-07 00:50:39.837618 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s 2026-03-07 00:50:39.837626 | orchestrator | 2026-03-07 00:50:39 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:50:39.838634 | orchestrator | 2026-03-07 00:50:39 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:50:39.840720 | orchestrator | 2026-03-07 00:50:39 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:50:39.840752 | orchestrator | 2026-03-07 00:50:39 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:50:42.924070 | orchestrator | 2026-03-07 00:50:42 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state STARTED 2026-03-07 00:50:42.926183 | orchestrator | 2026-03-07 00:50:42 | INFO  | Task dcab45ef-8630-471e-94a6-64ca26753e28 is in state STARTED 2026-03-07 00:50:42.927181 | orchestrator | 2026-03-07 00:50:42 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:50:42.930835 | orchestrator | 
2026-03-07 00:50:42 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:50:42.931587 | orchestrator | 2026-03-07 00:50:42 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:50:42.931616 | orchestrator | 2026-03-07 00:50:42 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:51:57.655288 | orchestrator | 2026-03-07 00:51:57 | INFO  | Task f137c63f-bf0c-449c-a6ac-635e9e35dc2b is in state SUCCESS 2026-03-07 00:51:57.657079 | orchestrator | 2026-03-07 00:51:57.657136 | orchestrator | 2026-03-07 00:51:57.657162 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-07 00:51:57.657181 | orchestrator | 2026-03-07 00:51:57.657199 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-07 00:51:57.657219 | orchestrator | Saturday 07 March 2026 00:46:45 +0000 (0:00:00.184) 0:00:00.184 ******** 2026-03-07 00:51:57.657239 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:51:57.657258 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:51:57.657350 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:51:57.657363 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:51:57.657648 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:51:57.657685 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:51:57.657700 | orchestrator | 2026-03-07 00:51:57.657714 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-07 00:51:57.657727 | orchestrator | Saturday 07 March 2026 00:46:46 +0000
(0:00:00.824) 0:00:01.008 ********
2026-03-07 00:51:57.657741 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:51:57.657768 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:51:57.657782 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:51:57.657795 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:57.657807 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.657820 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.657833 | orchestrator |
2026-03-07 00:51:57.657845 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-03-07 00:51:57.657858 | orchestrator | Saturday 07 March 2026 00:46:46 +0000 (0:00:00.809) 0:00:01.818 ********
2026-03-07 00:51:57.657871 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:51:57.658667 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:51:57.658697 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:51:57.658708 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:57.658719 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.658730 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.658741 | orchestrator |
2026-03-07 00:51:57.658752 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-03-07 00:51:57.658764 | orchestrator | Saturday 07 March 2026 00:46:47 +0000 (0:00:00.919) 0:00:02.737 ********
2026-03-07 00:51:57.658776 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:51:57.658786 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:51:57.658797 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:51:57.658808 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:51:57.658818 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:51:57.658829 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:51:57.658839 | orchestrator |
2026-03-07 00:51:57.658850 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-03-07 00:51:57.658882 | orchestrator | Saturday 07 March 2026 00:46:50 +0000 (0:00:02.762) 0:00:05.500 ********
2026-03-07 00:51:57.658893 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:51:57.658904 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:51:57.658915 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:51:57.658926 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:51:57.658937 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:51:57.658947 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:51:57.658958 | orchestrator |
2026-03-07 00:51:57.658969 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-03-07 00:51:57.658980 | orchestrator | Saturday 07 March 2026 00:46:52 +0000 (0:00:01.952) 0:00:07.452 ********
2026-03-07 00:51:57.658991 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:51:57.659002 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:51:57.659012 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:51:57.659023 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:51:57.659034 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:51:57.659044 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:51:57.659055 | orchestrator |
2026-03-07 00:51:57.659066 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-03-07 00:51:57.659077 | orchestrator | Saturday 07 March 2026 00:46:53 +0000 (0:00:01.421) 0:00:08.874 ********
2026-03-07 00:51:57.659088 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:51:57.659098 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:51:57.659109 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:51:57.659120 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:57.659131 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.659142 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.659153 | orchestrator |
2026-03-07 00:51:57.659163 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-03-07 00:51:57.659174 | orchestrator | Saturday 07 March 2026 00:46:55 +0000 (0:00:01.194) 0:00:10.069 ********
2026-03-07 00:51:57.659185 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:51:57.659196 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:51:57.659207 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:51:57.659217 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:57.659228 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.659239 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.659249 | orchestrator |
2026-03-07 00:51:57.659260 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-03-07 00:51:57.659271 | orchestrator | Saturday 07 March 2026 00:46:55 +0000 (0:00:00.869) 0:00:10.938 ********
2026-03-07 00:51:57.659282 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-07 00:51:57.659293 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-07 00:51:57.659304 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:51:57.659315 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-07 00:51:57.659326 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-07 00:51:57.659336 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:51:57.659348 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-07 00:51:57.659358 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-07 00:51:57.659369 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:51:57.659380 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-07 00:51:57.659435 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-07 00:51:57.659447 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:57.659458 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-07 00:51:57.659469 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-07 00:51:57.659488 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.659499 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-07 00:51:57.659510 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-07 00:51:57.659521 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.659532 | orchestrator |
2026-03-07 00:51:57.659542 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-03-07 00:51:57.659554 | orchestrator | Saturday 07 March 2026 00:46:56 +0000 (0:00:00.969) 0:00:11.908 ********
2026-03-07 00:51:57.659564 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:51:57.659575 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:51:57.659586 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:51:57.659597 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:57.659608 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.659619 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.659630 | orchestrator |
2026-03-07 00:51:57.659641 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-03-07 00:51:57.659653 | orchestrator | Saturday 07 March 2026 00:46:58 +0000 (0:00:01.663) 0:00:13.571 ********
2026-03-07 00:51:57.659664 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:51:57.659676 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:51:57.659687 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:51:57.659698 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:51:57.659709 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:51:57.659720 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:57.659731 | orchestrator |
2026-03-07 00:51:57.659742 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-03-07 00:51:57.659753 | orchestrator | Saturday 07 March 2026 00:46:59 +0000 (0:00:01.137) 0:00:14.709 ********
2026-03-07 00:51:57.659764 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:51:57.659775 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:51:57.659786 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:51:57.659797 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:51:57.659807 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:51:57.659818 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:51:57.659829 | orchestrator |
2026-03-07 00:51:57.659840 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-03-07 00:51:57.659851 | orchestrator | Saturday 07 March 2026 00:47:04 +0000 (0:00:05.223) 0:00:19.933 ********
2026-03-07 00:51:57.659862 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:51:57.659873 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:51:57.659883 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:51:57.659894 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:57.659905 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.659916 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.659927 | orchestrator |
2026-03-07 00:51:57.659938 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-03-07 00:51:57.659949 | orchestrator | Saturday 07 March 2026 00:47:06 +0000 (0:00:01.770) 0:00:21.704 ********
2026-03-07 00:51:57.659960 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:51:57.659971 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:51:57.659981 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:51:57.659992 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:57.660003 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.660013 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.660024 | orchestrator |
2026-03-07 00:51:57.660036 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-03-07 00:51:57.660049 | orchestrator | Saturday 07 March 2026 00:47:10 +0000 (0:00:03.276) 0:00:24.980 ********
2026-03-07 00:51:57.660060 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:51:57.660071 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:51:57.660082 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:51:57.660100 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:57.660111 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.660121 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.660132 | orchestrator |
2026-03-07 00:51:57.660143 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-07 00:51:57.660154 | orchestrator | Saturday 07 March 2026 00:47:11 +0000 (0:00:01.111) 0:00:26.091 ********
2026-03-07 00:51:57.660166 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-07 00:51:57.660178 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-07 00:51:57.660189 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:51:57.660200 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-07 00:51:57.660210 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-07 00:51:57.660222 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:51:57.660232 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-07 00:51:57.660243 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-07 00:51:57.660254 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:51:57.660265 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-07 00:51:57.660276 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-07 00:51:57.660287 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:57.660298 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-07 00:51:57.660308 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-07 00:51:57.660319 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.660330 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-07 00:51:57.660341 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-07 00:51:57.660352 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.660363 | orchestrator |
2026-03-07 00:51:57.660375 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-07 00:51:57.660428 | orchestrator | Saturday 07 March 2026 00:47:13 +0000 (0:00:01.878) 0:00:27.970 ********
2026-03-07 00:51:57.660441 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:51:57.660452 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:51:57.660463 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:51:57.660474 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:57.660485 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.660496 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.660507 | orchestrator |
2026-03-07 00:51:57.660517 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-07 00:51:57.660565 | orchestrator | Saturday 07 March 2026 00:47:14 +0000 (0:00:01.200) 0:00:29.170 ********
2026-03-07 00:51:57.660577 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:51:57.660588 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:51:57.660599 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:51:57.660610 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:57.660621 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.660631 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.660642 | orchestrator |
2026-03-07 00:51:57.660653 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-07 00:51:57.660663 | orchestrator |
2026-03-07 00:51:57.660674 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-07 00:51:57.660685 | orchestrator | Saturday 07 March 2026 00:47:16 +0000 (0:00:02.298) 0:00:31.468 ********
2026-03-07 00:51:57.660696 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:51:57.660707 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:51:57.660718 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:57.660729 | orchestrator |
2026-03-07 00:51:57.660739 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-07 00:51:57.660750 | orchestrator | Saturday 07 March 2026 00:47:18 +0000 (0:00:01.645) 0:00:33.114 ********
2026-03-07 00:51:57.660769 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:51:57.660781 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:51:57.660791 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:57.660802 | orchestrator |
2026-03-07 00:51:57.660813 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-07 00:51:57.660824 | orchestrator | Saturday 07 March 2026 00:47:19 +0000 (0:00:01.348) 0:00:34.463 ********
2026-03-07 00:51:57.660834 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:51:57.660845 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:51:57.660855 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:57.660866 | orchestrator |
2026-03-07 00:51:57.660877 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-07 00:51:57.660888 | orchestrator | Saturday 07 March 2026 00:47:20 +0000 (0:00:01.273) 0:00:35.736 ********
2026-03-07 00:51:57.660898 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:51:57.660909 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:51:57.660919 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:57.660930 | orchestrator |
2026-03-07 00:51:57.660940 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-07 00:51:57.660951 | orchestrator | Saturday 07 March 2026 00:47:21 +0000 (0:00:00.980) 0:00:36.717 ********
2026-03-07 00:51:57.660962 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:57.660973 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.660983 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.660994 | orchestrator |
2026-03-07 00:51:57.661005 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-07 00:51:57.661015 | orchestrator | Saturday 07 March 2026 00:47:22 +0000 (0:00:00.885) 0:00:37.602 ********
2026-03-07 00:51:57.661026 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:51:57.661037 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:51:57.661048 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:51:57.661059 | orchestrator |
2026-03-07 00:51:57.661069 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-07 00:51:57.661080 | orchestrator | Saturday 07 March 2026 00:47:23 +0000 (0:00:00.842) 0:00:38.445 ********
2026-03-07 00:51:57.661091 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:51:57.661102 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:51:57.661113 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:51:57.661123 | orchestrator |
2026-03-07 00:51:57.661134 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-07 00:51:57.661145 | orchestrator | Saturday 07 March 2026 00:47:25 +0000 (0:00:01.852) 0:00:40.297 ********
2026-03-07 00:51:57.661156 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:51:57.661167 | orchestrator |
2026-03-07 00:51:57.661178 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-07 00:51:57.661189 | orchestrator | Saturday 07 March 2026 00:47:26 +0000 (0:00:00.719) 0:00:41.017 ********
2026-03-07 00:51:57.661200 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:51:57.661210 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:57.661221 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:51:57.661231 | orchestrator |
2026-03-07 00:51:57.661242 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-07 00:51:57.661253 | orchestrator | Saturday 07 March 2026 00:47:30 +0000 (0:00:04.896) 0:00:45.913 ********
2026-03-07 00:51:57.661264 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.661274 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:51:57.661285 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.661296 | orchestrator |
2026-03-07 00:51:57.661306 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-07 00:51:57.661317 | orchestrator | Saturday 07 March 2026 00:47:31 +0000 (0:00:00.760) 0:00:46.674 ********
2026-03-07 00:51:57.661328 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.661339 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.661350 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:51:57.661368 | orchestrator |
2026-03-07 00:51:57.661379 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-03-07 00:51:57.661444 | orchestrator | Saturday 07 March 2026 00:47:32 +0000 (0:00:01.175) 0:00:47.849 ********
2026-03-07 00:51:57.661455 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.661466 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.661477 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:51:57.661487 | orchestrator |
2026-03-07 00:51:57.661498 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-03-07 00:51:57.661524 | orchestrator | Saturday 07 March 2026 00:47:35 +0000 (0:00:02.223) 0:00:50.073 ********
2026-03-07 00:51:57.661535 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:57.661546 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.661557 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.661568 | orchestrator |
2026-03-07 00:51:57.661578 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-07 00:51:57.661588 | orchestrator | Saturday 07 March 2026 00:47:35 +0000 (0:00:00.530) 0:00:50.603 ********
2026-03-07 00:51:57.661597 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:57.661607 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.661616 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.661626 | orchestrator |
2026-03-07 00:51:57.661635 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-07 00:51:57.661645 | orchestrator | Saturday 07 March 2026 00:47:36 +0000 (0:00:00.592) 0:00:51.196 ********
2026-03-07 00:51:57.661654 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:51:57.661664 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:51:57.661673 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:51:57.661683 | orchestrator |
2026-03-07 00:51:57.661692 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-07 00:51:57.661702 | orchestrator | Saturday 07 March 2026 00:47:38 +0000 (0:00:02.253) 0:00:53.449 ********
2026-03-07 00:51:57.661712 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:51:57.661721 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:51:57.661731 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:57.661741 | orchestrator |
2026-03-07 00:51:57.661750 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-07 00:51:57.661760 | orchestrator | Saturday 07 March 2026 00:47:41 +0000 (0:00:03.427) 0:00:56.877 ********
2026-03-07 00:51:57.661769 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:51:57.661779 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:51:57.661788 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:57.661798 | orchestrator |
2026-03-07 00:51:57.661808 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-07 00:51:57.661818 | orchestrator | Saturday 07 March 2026 00:47:42 +0000 (0:00:01.053) 0:00:57.930 ********
2026-03-07 00:51:57.661828 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-07 00:51:57.661838 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-07 00:51:57.661848 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-07 00:51:57.661858 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-07 00:51:57.661868 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-07 00:51:57.661878 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-07 00:51:57.661887 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-07 00:51:57.661904 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-07 00:51:57.661914 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-07 00:51:57.661924 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-07 00:51:57.661933 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-07 00:51:57.661943 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-07 00:51:57.661952 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:51:57.661962 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:51:57.661971 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:57.661981 | orchestrator |
2026-03-07 00:51:57.661991 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-07 00:51:57.662000 | orchestrator | Saturday 07 March 2026 00:48:26 +0000 (0:00:43.834) 0:01:41.765 ********
2026-03-07 00:51:57.662010 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:57.662054 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.662064 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.662074 | orchestrator |
2026-03-07 00:51:57.662083 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-07 00:51:57.662093 | orchestrator | Saturday 07 March 2026 00:48:27 +0000 (0:00:00.577) 0:01:42.343 ********
2026-03-07 00:51:57.662103 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:51:57.662112 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:51:57.662122 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:51:57.662131 | orchestrator |
2026-03-07 00:51:57.662141 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-07 00:51:57.662151 | orchestrator | Saturday 07 March 2026 00:48:28 +0000 (0:00:01.369) 0:01:43.712 ********
2026-03-07 00:51:57.662161 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:51:57.662170 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:51:57.662180 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:51:57.662190 | orchestrator |
2026-03-07 00:51:57.662210 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-07 00:51:57.662221 | orchestrator | Saturday 07 March 2026 00:48:30 +0000 (0:00:02.130) 0:01:45.842 ********
2026-03-07 00:51:57.662230 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:51:57.662240 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:51:57.662250 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:51:57.662259 | orchestrator |
2026-03-07 00:51:57.662269 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-07 00:51:57.662279 | orchestrator | Saturday 07 March 2026 00:48:58 +0000 (0:00:27.325) 0:02:13.167 ********
2026-03-07 00:51:57.662288 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:51:57.662298 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:51:57.662308 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:57.662317 | orchestrator |
2026-03-07 00:51:57.662327 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-07 00:51:57.662336 | orchestrator | Saturday 07 March 2026 00:48:58 +0000 (0:00:00.646) 0:02:13.814 ********
2026-03-07 00:51:57.662346 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:51:57.662356 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:51:57.662365 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:57.662375 | orchestrator |
2026-03-07 00:51:57.662399 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-07 00:51:57.662409 | orchestrator | Saturday 07 March 2026 00:48:59 +0000 (0:00:00.708) 0:02:14.522 ********
2026-03-07 00:51:57.662419 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:51:57.662440 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:51:57.662449 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:51:57.662459 | orchestrator |
2026-03-07 00:51:57.662469 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-07 00:51:57.662479 | orchestrator | Saturday 07 March 2026 00:49:00 +0000 (0:00:00.607) 0:02:15.129 ********
2026-03-07 00:51:57.662489 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:51:57.662498 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:57.662508 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:51:57.662517 | orchestrator |
2026-03-07 00:51:57.662527 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-07 00:51:57.662537 | orchestrator | Saturday 07 March 2026 00:49:01 +0000 (0:00:01.496) 0:02:16.627 ********
2026-03-07 00:51:57.662547 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:51:57.662556 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:51:57.662566 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:57.662575 | orchestrator |
2026-03-07 00:51:57.662585 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-07 00:51:57.662595 | orchestrator | Saturday 07 March 2026 00:49:02 +0000 (0:00:00.551) 0:02:17.178 ********
2026-03-07 00:51:57.662604 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:51:57.662614 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:51:57.662624 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:51:57.662634 | orchestrator |
2026-03-07 00:51:57.662643 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-07 00:51:57.662653 | orchestrator | Saturday 07 March 2026 00:49:03 +0000 (0:00:00.896) 0:02:18.074 ********
2026-03-07 00:51:57.662663 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:51:57.662673 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:51:57.662683 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:51:57.662693 | orchestrator |
2026-03-07 00:51:57.662704 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-07 00:51:57.662721 | orchestrator | Saturday 07 March 2026 00:49:03 +0000 (0:00:00.823) 0:02:18.897 ********
2026-03-07 00:51:57.662737 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:51:57.662753 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:51:57.662767 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:51:57.662784 | orchestrator |
2026-03-07 00:51:57.662799 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-07 00:51:57.662816 | orchestrator | Saturday 07 March 2026 00:49:05 +0000 (0:00:01.629) 0:02:20.526 ********
2026-03-07 00:51:57.662838 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:51:57.662859 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:51:57.662874 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:51:57.662889 | orchestrator |
2026-03-07 00:51:57.662904 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-07 00:51:57.662918 | orchestrator | Saturday 07 March 2026 00:49:06 +0000 (0:00:01.108) 0:02:21.634 ********
2026-03-07 00:51:57.662931 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:57.662946 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.662962 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.662977 | orchestrator |
2026-03-07 00:51:57.662994 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-07 00:51:57.663011 | orchestrator | Saturday 07 March 2026 00:49:06 +0000 (0:00:00.319) 0:02:21.954 ********
2026-03-07 00:51:57.663027 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:51:57.663042 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:51:57.663054 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:51:57.663071 | orchestrator |
2026-03-07 00:51:57.663087 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-07 00:51:57.663103 | orchestrator | Saturday 07 March 2026 00:49:07 +0000 (0:00:00.381) 0:02:22.336 ********
2026-03-07 00:51:57.663119 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:51:57.663133 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:51:57.663147 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:57.663175 | orchestrator |
2026-03-07 00:51:57.663192 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-07 00:51:57.663208 | orchestrator | Saturday 07 March 2026 00:49:08 +0000 (0:00:01.027) 0:02:23.364 ********
2026-03-07 00:51:57.663225 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:51:57.663241 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:51:57.663258 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:51:57.663274 | orchestrator |
2026-03-07 00:51:57.663289 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-07 00:51:57.663300 | orchestrator | Saturday 07 March 2026 00:49:09 +0000 (0:00:00.714) 0:02:24.078 ********
2026-03-07 00:51:57.663310 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-07 00:51:57.663336 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-07 00:51:57.663347 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-07 00:51:57.663357 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-07 00:51:57.663367 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-07 00:51:57.663377 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-07 00:51:57.663440 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-07 00:51:57.663451 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-07 00:51:57.663461 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-07 00:51:57.663471 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-07 00:51:57.663480 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-07 00:51:57.663490 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-07 00:51:57.663499 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-07 00:51:57.663509 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-07 00:51:57.663519 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-07 00:51:57.663528 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-07 00:51:57.663538 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-07 00:51:57.663548 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-07 00:51:57.663558 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-07 00:51:57.663589 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-07 00:51:57.663600 | orchestrator |
2026-03-07 00:51:57.663609 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-07 00:51:57.663619 | orchestrator |
2026-03-07 00:51:57.663629 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-07 00:51:57.663639 | orchestrator | Saturday 07 March 2026 00:49:12 +0000 (0:00:03.428) 0:02:27.507 ********
2026-03-07 00:51:57.663648 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:51:57.663658 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:51:57.663667 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:51:57.663677 | orchestrator |
2026-03-07 00:51:57.663687 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-07 00:51:57.663696 | orchestrator | Saturday 07 March 2026 00:49:13 +0000 (0:00:00.693) 0:02:28.201 ********
2026-03-07 00:51:57.663715 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:51:57.663725 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:51:57.663734 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:51:57.663743 | orchestrator |
2026-03-07 00:51:57.663753 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-07 00:51:57.663763 | orchestrator | Saturday 07 March 2026 00:49:14 +0000 (0:00:01.516) 0:02:29.717 ********
2026-03-07 00:51:57.663772 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:51:57.663782 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:51:57.663791 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:51:57.663801 | orchestrator |
2026-03-07 00:51:57.663810 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-07 00:51:57.663820 | orchestrator | Saturday 07 March 2026 00:49:15 +0000 (0:00:00.360) 0:02:30.077 ********
2026-03-07 00:51:57.663830 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:51:57.663840 | orchestrator |
2026-03-07 00:51:57.663850 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-07 00:51:57.663859 | orchestrator | Saturday 07 March 2026 00:49:16 +0000 (0:00:00.914) 0:02:30.992 ********
2026-03-07 00:51:57.663869 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:51:57.663878
| orchestrator | skipping: [testbed-node-4] 2026-03-07 00:51:57.663888 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:51:57.663898 | orchestrator | 2026-03-07 00:51:57.663907 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-07 00:51:57.663916 | orchestrator | Saturday 07 March 2026 00:49:16 +0000 (0:00:00.350) 0:02:31.342 ******** 2026-03-07 00:51:57.663924 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:51:57.663932 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:51:57.663940 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:51:57.663947 | orchestrator | 2026-03-07 00:51:57.663955 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-07 00:51:57.663963 | orchestrator | Saturday 07 March 2026 00:49:16 +0000 (0:00:00.355) 0:02:31.698 ******** 2026-03-07 00:51:57.663971 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:51:57.663979 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:51:57.663986 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:51:57.663994 | orchestrator | 2026-03-07 00:51:57.664002 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-07 00:51:57.664010 | orchestrator | Saturday 07 March 2026 00:49:17 +0000 (0:00:00.388) 0:02:32.087 ******** 2026-03-07 00:51:57.664018 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:51:57.664026 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:51:57.664038 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:51:57.664046 | orchestrator | 2026-03-07 00:51:57.664060 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-07 00:51:57.664069 | orchestrator | Saturday 07 March 2026 00:49:18 +0000 (0:00:01.074) 0:02:33.161 ******** 2026-03-07 00:51:57.664077 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:51:57.664085 | 
orchestrator | changed: [testbed-node-4] 2026-03-07 00:51:57.664093 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:51:57.664101 | orchestrator | 2026-03-07 00:51:57.664109 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-07 00:51:57.664117 | orchestrator | Saturday 07 March 2026 00:49:19 +0000 (0:00:01.244) 0:02:34.405 ******** 2026-03-07 00:51:57.664125 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:51:57.664132 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:51:57.664140 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:51:57.664148 | orchestrator | 2026-03-07 00:51:57.664156 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-07 00:51:57.664164 | orchestrator | Saturday 07 March 2026 00:49:20 +0000 (0:00:01.410) 0:02:35.816 ******** 2026-03-07 00:51:57.664172 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:51:57.664185 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:51:57.664193 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:51:57.664201 | orchestrator | 2026-03-07 00:51:57.664209 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-07 00:51:57.664217 | orchestrator | 2026-03-07 00:51:57.664225 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-07 00:51:57.664233 | orchestrator | Saturday 07 March 2026 00:49:31 +0000 (0:00:10.405) 0:02:46.221 ******** 2026-03-07 00:51:57.664240 | orchestrator | ok: [testbed-manager] 2026-03-07 00:51:57.664248 | orchestrator | 2026-03-07 00:51:57.664256 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-07 00:51:57.664264 | orchestrator | Saturday 07 March 2026 00:49:32 +0000 (0:00:01.165) 0:02:47.387 ******** 2026-03-07 00:51:57.664272 | orchestrator | changed: [testbed-manager] 2026-03-07 
00:51:57.664280 | orchestrator | 2026-03-07 00:51:57.664288 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-07 00:51:57.664295 | orchestrator | Saturday 07 March 2026 00:49:32 +0000 (0:00:00.544) 0:02:47.931 ******** 2026-03-07 00:51:57.664303 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-07 00:51:57.664311 | orchestrator | 2026-03-07 00:51:57.664319 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-07 00:51:57.664327 | orchestrator | Saturday 07 March 2026 00:49:33 +0000 (0:00:00.671) 0:02:48.603 ******** 2026-03-07 00:51:57.664335 | orchestrator | changed: [testbed-manager] 2026-03-07 00:51:57.664343 | orchestrator | 2026-03-07 00:51:57.664351 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-07 00:51:57.664359 | orchestrator | Saturday 07 March 2026 00:49:34 +0000 (0:00:00.916) 0:02:49.520 ******** 2026-03-07 00:51:57.664366 | orchestrator | changed: [testbed-manager] 2026-03-07 00:51:57.664374 | orchestrator | 2026-03-07 00:51:57.664398 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-07 00:51:57.664407 | orchestrator | Saturday 07 March 2026 00:49:35 +0000 (0:00:00.680) 0:02:50.200 ******** 2026-03-07 00:51:57.664415 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-07 00:51:57.664422 | orchestrator | 2026-03-07 00:51:57.664430 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-07 00:51:57.664438 | orchestrator | Saturday 07 March 2026 00:49:37 +0000 (0:00:01.876) 0:02:52.077 ******** 2026-03-07 00:51:57.664446 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-07 00:51:57.664464 | orchestrator | 2026-03-07 00:51:57.664472 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-03-07 00:51:57.664480 | orchestrator | Saturday 07 March 2026 00:49:38 +0000 (0:00:00.966) 0:02:53.043 ******** 2026-03-07 00:51:57.664487 | orchestrator | changed: [testbed-manager] 2026-03-07 00:51:57.664495 | orchestrator | 2026-03-07 00:51:57.664503 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-07 00:51:57.664511 | orchestrator | Saturday 07 March 2026 00:49:38 +0000 (0:00:00.777) 0:02:53.820 ******** 2026-03-07 00:51:57.664519 | orchestrator | changed: [testbed-manager] 2026-03-07 00:51:57.664527 | orchestrator | 2026-03-07 00:51:57.664535 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-03-07 00:51:57.664542 | orchestrator | 2026-03-07 00:51:57.664550 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-03-07 00:51:57.664558 | orchestrator | Saturday 07 March 2026 00:49:39 +0000 (0:00:00.435) 0:02:54.255 ******** 2026-03-07 00:51:57.664566 | orchestrator | ok: [testbed-manager] 2026-03-07 00:51:57.664574 | orchestrator | 2026-03-07 00:51:57.664581 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-03-07 00:51:57.664589 | orchestrator | Saturday 07 March 2026 00:49:39 +0000 (0:00:00.170) 0:02:54.426 ******** 2026-03-07 00:51:57.664597 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-03-07 00:51:57.664605 | orchestrator | 2026-03-07 00:51:57.664613 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-03-07 00:51:57.664626 | orchestrator | Saturday 07 March 2026 00:49:39 +0000 (0:00:00.260) 0:02:54.687 ******** 2026-03-07 00:51:57.664634 | orchestrator | ok: [testbed-manager] 2026-03-07 00:51:57.664641 | orchestrator | 2026-03-07 00:51:57.664649 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2026-03-07 00:51:57.664657 | orchestrator | Saturday 07 March 2026 00:49:41 +0000 (0:00:01.523) 0:02:56.210 ******** 2026-03-07 00:51:57.664665 | orchestrator | ok: [testbed-manager] 2026-03-07 00:51:57.664673 | orchestrator | 2026-03-07 00:51:57.664681 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-03-07 00:51:57.664688 | orchestrator | Saturday 07 March 2026 00:49:42 +0000 (0:00:01.724) 0:02:57.935 ******** 2026-03-07 00:51:57.664696 | orchestrator | changed: [testbed-manager] 2026-03-07 00:51:57.664704 | orchestrator | 2026-03-07 00:51:57.664712 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-03-07 00:51:57.664720 | orchestrator | Saturday 07 March 2026 00:49:43 +0000 (0:00:00.880) 0:02:58.815 ******** 2026-03-07 00:51:57.664732 | orchestrator | ok: [testbed-manager] 2026-03-07 00:51:57.664740 | orchestrator | 2026-03-07 00:51:57.664753 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-03-07 00:51:57.664761 | orchestrator | Saturday 07 March 2026 00:49:44 +0000 (0:00:00.519) 0:02:59.335 ******** 2026-03-07 00:51:57.664769 | orchestrator | changed: [testbed-manager] 2026-03-07 00:51:57.664777 | orchestrator | 2026-03-07 00:51:57.664785 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-03-07 00:51:57.664793 | orchestrator | Saturday 07 March 2026 00:49:56 +0000 (0:00:11.794) 0:03:11.129 ******** 2026-03-07 00:51:57.664800 | orchestrator | changed: [testbed-manager] 2026-03-07 00:51:57.664808 | orchestrator | 2026-03-07 00:51:57.664816 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-03-07 00:51:57.664824 | orchestrator | Saturday 07 March 2026 00:50:14 +0000 (0:00:18.802) 0:03:29.931 ******** 2026-03-07 00:51:57.664832 | orchestrator | ok: [testbed-manager] 2026-03-07 
00:51:57.664839 | orchestrator | 2026-03-07 00:51:57.664847 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-03-07 00:51:57.664855 | orchestrator | 2026-03-07 00:51:57.664863 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-03-07 00:51:57.664870 | orchestrator | Saturday 07 March 2026 00:50:16 +0000 (0:00:01.151) 0:03:31.082 ******** 2026-03-07 00:51:57.664878 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:51:57.664886 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:51:57.664894 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:51:57.664902 | orchestrator | 2026-03-07 00:51:57.664910 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-03-07 00:51:57.664917 | orchestrator | Saturday 07 March 2026 00:50:16 +0000 (0:00:00.428) 0:03:31.510 ******** 2026-03-07 00:51:57.664925 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:57.664933 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:57.664941 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:57.664948 | orchestrator | 2026-03-07 00:51:57.664956 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-03-07 00:51:57.664964 | orchestrator | Saturday 07 March 2026 00:50:17 +0000 (0:00:00.574) 0:03:32.085 ******** 2026-03-07 00:51:57.664972 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:51:57.664980 | orchestrator | 2026-03-07 00:51:57.664987 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-03-07 00:51:57.664995 | orchestrator | Saturday 07 March 2026 00:50:18 +0000 (0:00:01.440) 0:03:33.525 ******** 2026-03-07 00:51:57.665003 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-07 00:51:57.665010 | 
orchestrator | 2026-03-07 00:51:57.665018 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-03-07 00:51:57.665026 | orchestrator | Saturday 07 March 2026 00:50:19 +0000 (0:00:01.307) 0:03:34.833 ******** 2026-03-07 00:51:57.665039 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-07 00:51:57.665047 | orchestrator | 2026-03-07 00:51:57.665055 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-07 00:51:57.665063 | orchestrator | Saturday 07 March 2026 00:50:21 +0000 (0:00:01.282) 0:03:36.116 ******** 2026-03-07 00:51:57.665071 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:57.665078 | orchestrator | 2026-03-07 00:51:57.665086 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-07 00:51:57.665094 | orchestrator | Saturday 07 March 2026 00:50:21 +0000 (0:00:00.229) 0:03:36.345 ******** 2026-03-07 00:51:57.665102 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-07 00:51:57.665110 | orchestrator | 2026-03-07 00:51:57.665117 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-07 00:51:57.665125 | orchestrator | Saturday 07 March 2026 00:50:22 +0000 (0:00:01.578) 0:03:37.924 ******** 2026-03-07 00:51:57.665133 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:57.665141 | orchestrator | 2026-03-07 00:51:57.665149 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-07 00:51:57.665156 | orchestrator | Saturday 07 March 2026 00:50:23 +0000 (0:00:00.227) 0:03:38.151 ******** 2026-03-07 00:51:57.665164 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:57.665172 | orchestrator | 2026-03-07 00:51:57.665180 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-07 00:51:57.665188 | orchestrator | Saturday 07 
March 2026 00:50:23 +0000 (0:00:00.169) 0:03:38.321 ******** 2026-03-07 00:51:57.665195 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:57.665203 | orchestrator | 2026-03-07 00:51:57.665211 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-07 00:51:57.665219 | orchestrator | Saturday 07 March 2026 00:50:23 +0000 (0:00:00.186) 0:03:38.508 ******** 2026-03-07 00:51:57.665227 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:57.665234 | orchestrator | 2026-03-07 00:51:57.665242 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-07 00:51:57.665250 | orchestrator | Saturday 07 March 2026 00:50:23 +0000 (0:00:00.143) 0:03:38.651 ******** 2026-03-07 00:51:57.665258 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-07 00:51:57.665265 | orchestrator | 2026-03-07 00:51:57.665273 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-07 00:51:57.665281 | orchestrator | Saturday 07 March 2026 00:50:30 +0000 (0:00:06.305) 0:03:44.957 ******** 2026-03-07 00:51:57.665289 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-07 00:51:57.665296 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-03-07 00:51:57.665305 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-07 00:51:57.665312 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-07 00:51:57.665320 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-07 00:51:57.665328 | orchestrator | 2026-03-07 00:51:57.665336 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-07 00:51:57.665348 | orchestrator | Saturday 07 March 2026 00:51:17 +0000 (0:00:47.049) 0:04:32.007 ******** 2026-03-07 00:51:57.665361 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-07 00:51:57.665370 | orchestrator | 2026-03-07 00:51:57.665377 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-07 00:51:57.665400 | orchestrator | Saturday 07 March 2026 00:51:18 +0000 (0:00:01.723) 0:04:33.731 ******** 2026-03-07 00:51:57.665408 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-07 00:51:57.665415 | orchestrator | 2026-03-07 00:51:57.665423 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-07 00:51:57.665431 | orchestrator | Saturday 07 March 2026 00:51:21 +0000 (0:00:02.372) 0:04:36.103 ******** 2026-03-07 00:51:57.665439 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-07 00:51:57.665452 | orchestrator | 2026-03-07 00:51:57.665460 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-07 00:51:57.665468 | orchestrator | Saturday 07 March 2026 00:51:22 +0000 (0:00:01.429) 0:04:37.533 ******** 2026-03-07 00:51:57.665476 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:57.665484 | orchestrator | 2026-03-07 00:51:57.665492 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-07 00:51:57.665500 | orchestrator 
| Saturday 07 March 2026 00:51:22 +0000 (0:00:00.224) 0:04:37.757 ******** 2026-03-07 00:51:57.665507 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-07 00:51:57.665515 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-07 00:51:57.665523 | orchestrator | 2026-03-07 00:51:57.665531 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-07 00:51:57.665539 | orchestrator | Saturday 07 March 2026 00:51:25 +0000 (0:00:02.976) 0:04:40.733 ******** 2026-03-07 00:51:57.665547 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:57.665555 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:57.665562 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:57.665570 | orchestrator | 2026-03-07 00:51:57.665578 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-07 00:51:57.665586 | orchestrator | Saturday 07 March 2026 00:51:26 +0000 (0:00:00.375) 0:04:41.109 ******** 2026-03-07 00:51:57.665594 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:51:57.665601 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:51:57.665609 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:51:57.665617 | orchestrator | 2026-03-07 00:51:57.665625 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-07 00:51:57.665633 | orchestrator | 2026-03-07 00:51:57.665641 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-07 00:51:57.665649 | orchestrator | Saturday 07 March 2026 00:51:27 +0000 (0:00:01.108) 0:04:42.217 ******** 2026-03-07 00:51:57.665656 | orchestrator | ok: [testbed-manager] 2026-03-07 00:51:57.665664 | orchestrator | 2026-03-07 00:51:57.665672 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-03-07 00:51:57.665680 | orchestrator | Saturday 07 March 2026 00:51:27 +0000 (0:00:00.162) 0:04:42.379 ******** 2026-03-07 00:51:57.665688 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-07 00:51:57.665695 | orchestrator | 2026-03-07 00:51:57.665703 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-07 00:51:57.665711 | orchestrator | Saturday 07 March 2026 00:51:27 +0000 (0:00:00.222) 0:04:42.602 ******** 2026-03-07 00:51:57.665719 | orchestrator | changed: [testbed-manager] 2026-03-07 00:51:57.665726 | orchestrator | 2026-03-07 00:51:57.665734 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-07 00:51:57.665742 | orchestrator | 2026-03-07 00:51:57.665750 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-07 00:51:57.665757 | orchestrator | Saturday 07 March 2026 00:51:35 +0000 (0:00:07.929) 0:04:50.531 ******** 2026-03-07 00:51:57.665765 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:51:57.665773 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:51:57.665781 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:51:57.665789 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:51:57.665796 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:51:57.665804 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:51:57.665812 | orchestrator | 2026-03-07 00:51:57.665820 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-07 00:51:57.665827 | orchestrator | Saturday 07 March 2026 00:51:36 +0000 (0:00:01.004) 0:04:51.536 ******** 2026-03-07 00:51:57.665836 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-07 00:51:57.665843 | orchestrator | ok: [testbed-node-3 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-03-07 00:51:57.665856 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-07 00:51:57.665864 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-07 00:51:57.665872 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-07 00:51:57.665880 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-07 00:51:57.665887 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-07 00:51:57.665895 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-07 00:51:57.665903 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-07 00:51:57.665911 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-07 00:51:57.665919 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-07 00:51:57.665926 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-07 00:51:57.665943 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-07 00:51:57.665952 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-07 00:51:57.665959 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-07 00:51:57.665967 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-07 00:51:57.665975 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-07 00:51:57.665983 | orchestrator | ok: [testbed-node-4 -> 
localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-07 00:51:57.665999 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-07 00:51:57.666007 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-07 00:51:57.666057 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-07 00:51:57.666067 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-07 00:51:57.666075 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-07 00:51:57.666083 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-07 00:51:57.666090 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-07 00:51:57.666098 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-07 00:51:57.666106 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-07 00:51:57.666114 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-07 00:51:57.666122 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-07 00:51:57.666129 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-07 00:51:57.666137 | orchestrator | 2026-03-07 00:51:57.666145 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-07 00:51:57.666152 | orchestrator | Saturday 07 March 2026 00:51:55 +0000 (0:00:18.628) 0:05:10.165 ******** 2026-03-07 00:51:57.666160 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:51:57.666168 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:51:57.666176 | 
orchestrator | skipping: [testbed-node-5] 2026-03-07 00:51:57.666183 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:57.666191 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:57.666199 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:57.666207 | orchestrator | 2026-03-07 00:51:57.666221 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-07 00:51:57.666229 | orchestrator | Saturday 07 March 2026 00:51:56 +0000 (0:00:00.892) 0:05:11.058 ******** 2026-03-07 00:51:57.666237 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:51:57.666245 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:51:57.666252 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:51:57.666260 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:51:57.666268 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:51:57.666276 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:51:57.666283 | orchestrator | 2026-03-07 00:51:57.666291 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:51:57.666299 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:51:57.666309 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-07 00:51:57.666323 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-07 00:51:57.666336 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-07 00:51:57.666350 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-07 00:51:57.666361 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-07 00:51:57.666427 | orchestrator | 
testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-07 00:51:57.666445 | orchestrator | 2026-03-07 00:51:57.666457 | orchestrator | 2026-03-07 00:51:57.666470 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:51:57.666482 | orchestrator | Saturday 07 March 2026 00:51:56 +0000 (0:00:00.483) 0:05:11.541 ******** 2026-03-07 00:51:57.666495 | orchestrator | =============================================================================== 2026-03-07 00:51:57.666507 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 47.05s 2026-03-07 00:51:57.666520 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.83s 2026-03-07 00:51:57.666532 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.33s 2026-03-07 00:51:57.666562 | orchestrator | kubectl : Install required packages ------------------------------------ 18.80s 2026-03-07 00:51:57.666577 | orchestrator | Manage labels ---------------------------------------------------------- 18.63s 2026-03-07 00:51:57.666590 | orchestrator | kubectl : Add repository Debian ---------------------------------------- 11.79s 2026-03-07 00:51:57.666603 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.41s 2026-03-07 00:51:57.666617 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 7.93s 2026-03-07 00:51:57.666625 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.31s 2026-03-07 00:51:57.666633 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.22s 2026-03-07 00:51:57.666641 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 4.90s 2026-03-07 00:51:57.666649 | orchestrator | k3s_server : Remove 
manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.43s 2026-03-07 00:51:57.666657 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.43s 2026-03-07 00:51:57.666665 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 3.28s 2026-03-07 00:51:57.666685 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.98s 2026-03-07 00:51:57.666693 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.76s 2026-03-07 00:51:57.666701 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.37s 2026-03-07 00:51:57.666709 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.30s 2026-03-07 00:51:57.666717 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.25s 2026-03-07 00:51:57.666724 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.22s 2026-03-07 00:51:57.666732 | orchestrator | 2026-03-07 00:51:57 | INFO  | Task dcab45ef-8630-471e-94a6-64ca26753e28 is in state STARTED 2026-03-07 00:51:57.666741 | orchestrator | 2026-03-07 00:51:57 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:51:57.666748 | orchestrator | 2026-03-07 00:51:57 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:51:57.666756 | orchestrator | 2026-03-07 00:51:57 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:51:57.666764 | orchestrator | 2026-03-07 00:51:57 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:52:00.713848 | orchestrator | 2026-03-07 00:52:00 | INFO  | Task dcab45ef-8630-471e-94a6-64ca26753e28 is in state STARTED 2026-03-07 00:52:00.715544 | orchestrator | 2026-03-07 00:52:00 | INFO  | 
Task 66532a73-c561-4d66-be97-0f26381cbd62 is in state STARTED 2026-03-07 00:52:00.716261 | orchestrator | 2026-03-07 00:52:00 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:52:00.717360 | orchestrator | 2026-03-07 00:52:00 | INFO  | Task 52303d06-2304-4018-90bd-b14aba7292da is in state STARTED 2026-03-07 00:52:00.718344 | orchestrator | 2026-03-07 00:52:00 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:52:00.719331 | orchestrator | 2026-03-07 00:52:00 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:52:00.719365 | orchestrator | 2026-03-07 00:52:00 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:52:03.753370 | orchestrator | 2026-03-07 00:52:03 | INFO  | Task dcab45ef-8630-471e-94a6-64ca26753e28 is in state STARTED 2026-03-07 00:52:03.753724 | orchestrator | 2026-03-07 00:52:03 | INFO  | Task 66532a73-c561-4d66-be97-0f26381cbd62 is in state STARTED 2026-03-07 00:52:03.754076 | orchestrator | 2026-03-07 00:52:03 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:52:03.754950 | orchestrator | 2026-03-07 00:52:03 | INFO  | Task 52303d06-2304-4018-90bd-b14aba7292da is in state STARTED 2026-03-07 00:52:03.755666 | orchestrator | 2026-03-07 00:52:03 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:52:03.756658 | orchestrator | 2026-03-07 00:52:03 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:52:03.756703 | orchestrator | 2026-03-07 00:52:03 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:52:06.816536 | orchestrator | 2026-03-07 00:52:06 | INFO  | Task dcab45ef-8630-471e-94a6-64ca26753e28 is in state STARTED 2026-03-07 00:52:06.818231 | orchestrator | 2026-03-07 00:52:06 | INFO  | Task 66532a73-c561-4d66-be97-0f26381cbd62 is in state SUCCESS 2026-03-07 00:52:06.818293 | orchestrator | 2026-03-07 00:52:06 | INFO  | Task 
5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:52:06.818795 | orchestrator | 2026-03-07 00:52:06 | INFO  | Task 52303d06-2304-4018-90bd-b14aba7292da is in state STARTED 2026-03-07 00:52:06.819724 | orchestrator | 2026-03-07 00:52:06 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:52:06.820533 | orchestrator | 2026-03-07 00:52:06 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:52:06.820728 | orchestrator | 2026-03-07 00:52:06 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:52:09.864789 | orchestrator | 2026-03-07 00:52:09 | INFO  | Task dcab45ef-8630-471e-94a6-64ca26753e28 is in state STARTED 2026-03-07 00:52:09.865746 | orchestrator | 2026-03-07 00:52:09 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:52:09.867912 | orchestrator | 2026-03-07 00:52:09 | INFO  | Task 52303d06-2304-4018-90bd-b14aba7292da is in state STARTED 2026-03-07 00:52:09.868806 | orchestrator | 2026-03-07 00:52:09 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:52:09.869759 | orchestrator | 2026-03-07 00:52:09 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:52:09.873009 | orchestrator | 2026-03-07 00:52:09 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:52:12.914188 | orchestrator | 2026-03-07 00:52:12 | INFO  | Task dcab45ef-8630-471e-94a6-64ca26753e28 is in state STARTED 2026-03-07 00:52:12.914479 | orchestrator | 2026-03-07 00:52:12 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:52:12.914749 | orchestrator | 2026-03-07 00:52:12 | INFO  | Task 52303d06-2304-4018-90bd-b14aba7292da is in state SUCCESS 2026-03-07 00:52:12.915627 | orchestrator | 2026-03-07 00:52:12 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:52:12.916383 | orchestrator | 2026-03-07 00:52:12 | INFO  | Task 
32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:52:12.916435 | orchestrator | 2026-03-07 00:52:12 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:52:15.959343 | orchestrator | 2026-03-07 00:52:15 | INFO  | Task dcab45ef-8630-471e-94a6-64ca26753e28 is in state STARTED 2026-03-07 00:52:15.959486 | orchestrator | 2026-03-07 00:52:15 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:52:15.960682 | orchestrator | 2026-03-07 00:52:15 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:52:15.961878 | orchestrator | 2026-03-07 00:52:15 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:52:15.961900 | orchestrator | 2026-03-07 00:52:15 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:52:18.996062 | orchestrator | 2026-03-07 00:52:18 | INFO  | Task dcab45ef-8630-471e-94a6-64ca26753e28 is in state STARTED 2026-03-07 00:52:18.997511 | orchestrator | 2026-03-07 00:52:18 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:52:18.998069 | orchestrator | 2026-03-07 00:52:18 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:52:18.999111 | orchestrator | 2026-03-07 00:52:18 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:52:18.999191 | orchestrator | 2026-03-07 00:52:18 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:52:22.037304 | orchestrator | 2026-03-07 00:52:22 | INFO  | Task dcab45ef-8630-471e-94a6-64ca26753e28 is in state STARTED 2026-03-07 00:52:22.037582 | orchestrator | 2026-03-07 00:52:22 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:52:22.038687 | orchestrator | 2026-03-07 00:52:22 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:52:22.039670 | orchestrator | 2026-03-07 00:52:22 | INFO  | Task 
32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:52:22.039742 | orchestrator | 2026-03-07 00:52:22 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:52:25.121019 | orchestrator | 2026-03-07 00:52:25 | INFO  | Task dcab45ef-8630-471e-94a6-64ca26753e28 is in state STARTED 2026-03-07 00:52:25.122893 | orchestrator | 2026-03-07 00:52:25 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:52:25.124509 | orchestrator | 2026-03-07 00:52:25 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:52:25.125897 | orchestrator | 2026-03-07 00:52:25 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:52:25.125938 | orchestrator | 2026-03-07 00:52:25 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:52:28.174853 | orchestrator | 2026-03-07 00:52:28 | INFO  | Task dcab45ef-8630-471e-94a6-64ca26753e28 is in state STARTED 2026-03-07 00:52:28.175978 | orchestrator | 2026-03-07 00:52:28 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state STARTED 2026-03-07 00:52:28.179129 | orchestrator | 2026-03-07 00:52:28 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:52:28.181645 | orchestrator | 2026-03-07 00:52:28 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:52:28.181701 | orchestrator | 2026-03-07 00:52:28 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:52:31.239323 | orchestrator | 2026-03-07 00:52:31 | INFO  | Task dcab45ef-8630-471e-94a6-64ca26753e28 is in state STARTED 2026-03-07 00:52:31.240210 | orchestrator | 2026-03-07 00:52:31 | INFO  | Task 5f6e5475-889c-4983-a19f-0115efb73f51 is in state SUCCESS 2026-03-07 00:52:31.241632 | orchestrator | 2026-03-07 00:52:31.241685 | orchestrator | 2026-03-07 00:52:31.241703 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-07 00:52:31.241718 | 
orchestrator | 2026-03-07 00:52:31.241732 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-07 00:52:31.241747 | orchestrator | Saturday 07 March 2026 00:52:02 +0000 (0:00:00.212) 0:00:00.212 ******** 2026-03-07 00:52:31.241761 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-07 00:52:31.241775 | orchestrator | 2026-03-07 00:52:31.241787 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-07 00:52:31.241801 | orchestrator | Saturday 07 March 2026 00:52:03 +0000 (0:00:00.944) 0:00:01.156 ******** 2026-03-07 00:52:31.241814 | orchestrator | changed: [testbed-manager] 2026-03-07 00:52:31.241827 | orchestrator | 2026-03-07 00:52:31.241839 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-07 00:52:31.241852 | orchestrator | Saturday 07 March 2026 00:52:04 +0000 (0:00:01.346) 0:00:02.503 ******** 2026-03-07 00:52:31.241878 | orchestrator | changed: [testbed-manager] 2026-03-07 00:52:31.241892 | orchestrator | 2026-03-07 00:52:31.241906 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:52:31.241921 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:52:31.241937 | orchestrator | 2026-03-07 00:52:31.241951 | orchestrator | 2026-03-07 00:52:31.241964 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:52:31.241976 | orchestrator | Saturday 07 March 2026 00:52:05 +0000 (0:00:00.554) 0:00:03.057 ******** 2026-03-07 00:52:31.241985 | orchestrator | =============================================================================== 2026-03-07 00:52:31.241993 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.35s 2026-03-07 00:52:31.242001 | orchestrator | Get 
kubeconfig file ----------------------------------------------------- 0.94s 2026-03-07 00:52:31.242009 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.55s 2026-03-07 00:52:31.242092 | orchestrator | 2026-03-07 00:52:31.242102 | orchestrator | 2026-03-07 00:52:31.242110 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-07 00:52:31.242118 | orchestrator | 2026-03-07 00:52:31.242126 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-07 00:52:31.242140 | orchestrator | Saturday 07 March 2026 00:52:02 +0000 (0:00:00.186) 0:00:00.186 ******** 2026-03-07 00:52:31.242155 | orchestrator | ok: [testbed-manager] 2026-03-07 00:52:31.242170 | orchestrator | 2026-03-07 00:52:31.242184 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-07 00:52:31.242199 | orchestrator | Saturday 07 March 2026 00:52:02 +0000 (0:00:00.699) 0:00:00.885 ******** 2026-03-07 00:52:31.242213 | orchestrator | ok: [testbed-manager] 2026-03-07 00:52:31.242228 | orchestrator | 2026-03-07 00:52:31.242241 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-07 00:52:31.242253 | orchestrator | Saturday 07 March 2026 00:52:03 +0000 (0:00:00.757) 0:00:01.643 ******** 2026-03-07 00:52:31.242261 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-07 00:52:31.242274 | orchestrator | 2026-03-07 00:52:31.242288 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-07 00:52:31.242302 | orchestrator | Saturday 07 March 2026 00:52:04 +0000 (0:00:00.802) 0:00:02.445 ******** 2026-03-07 00:52:31.242316 | orchestrator | changed: [testbed-manager] 2026-03-07 00:52:31.242330 | orchestrator | 2026-03-07 00:52:31.242343 | orchestrator | TASK [Change server address in the kubeconfig] 
********************************* 2026-03-07 00:52:31.242357 | orchestrator | Saturday 07 March 2026 00:52:06 +0000 (0:00:01.986) 0:00:04.432 ******** 2026-03-07 00:52:31.242365 | orchestrator | changed: [testbed-manager] 2026-03-07 00:52:31.242373 | orchestrator | 2026-03-07 00:52:31.242381 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-07 00:52:31.242389 | orchestrator | Saturday 07 March 2026 00:52:06 +0000 (0:00:00.656) 0:00:05.089 ******** 2026-03-07 00:52:31.242397 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-07 00:52:31.242404 | orchestrator | 2026-03-07 00:52:31.242412 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-07 00:52:31.242420 | orchestrator | Saturday 07 March 2026 00:52:08 +0000 (0:00:01.729) 0:00:06.819 ******** 2026-03-07 00:52:31.242428 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-07 00:52:31.242436 | orchestrator | 2026-03-07 00:52:31.242445 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-07 00:52:31.242488 | orchestrator | Saturday 07 March 2026 00:52:09 +0000 (0:00:00.958) 0:00:07.778 ******** 2026-03-07 00:52:31.242497 | orchestrator | ok: [testbed-manager] 2026-03-07 00:52:31.242505 | orchestrator | 2026-03-07 00:52:31.242513 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-07 00:52:31.242520 | orchestrator | Saturday 07 March 2026 00:52:10 +0000 (0:00:00.486) 0:00:08.265 ******** 2026-03-07 00:52:31.242529 | orchestrator | ok: [testbed-manager] 2026-03-07 00:52:31.242595 | orchestrator | 2026-03-07 00:52:31.242615 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:52:31.242764 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:52:31.242795 | 
orchestrator | 2026-03-07 00:52:31.242810 | orchestrator | 2026-03-07 00:52:31.242823 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:52:31.242837 | orchestrator | Saturday 07 March 2026 00:52:10 +0000 (0:00:00.360) 0:00:08.626 ******** 2026-03-07 00:52:31.242849 | orchestrator | =============================================================================== 2026-03-07 00:52:31.242857 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.99s 2026-03-07 00:52:31.242865 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.73s 2026-03-07 00:52:31.242873 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.96s 2026-03-07 00:52:31.242908 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.80s 2026-03-07 00:52:31.242917 | orchestrator | Create .kube directory -------------------------------------------------- 0.76s 2026-03-07 00:52:31.242925 | orchestrator | Get home directory of operator user ------------------------------------- 0.70s 2026-03-07 00:52:31.242933 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.66s 2026-03-07 00:52:31.242941 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.49s 2026-03-07 00:52:31.242949 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.36s 2026-03-07 00:52:31.242957 | orchestrator | 2026-03-07 00:52:31.242965 | orchestrator | 2026-03-07 00:52:31.242973 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-03-07 00:52:31.242981 | orchestrator | 2026-03-07 00:52:31.242989 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-07 00:52:31.242997 | orchestrator | Saturday 07 March 2026 00:49:43 
+0000 (0:00:00.308) 0:00:00.308 ******** 2026-03-07 00:52:31.243005 | orchestrator | ok: [localhost] => { 2026-03-07 00:52:31.243014 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-03-07 00:52:31.243023 | orchestrator | } 2026-03-07 00:52:31.243031 | orchestrator | 2026-03-07 00:52:31.243039 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-03-07 00:52:31.243047 | orchestrator | Saturday 07 March 2026 00:49:43 +0000 (0:00:00.110) 0:00:00.419 ******** 2026-03-07 00:52:31.243056 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-03-07 00:52:31.243066 | orchestrator | ...ignoring 2026-03-07 00:52:31.243087 | orchestrator | 2026-03-07 00:52:31.243096 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-03-07 00:52:31.243104 | orchestrator | Saturday 07 March 2026 00:49:46 +0000 (0:00:03.082) 0:00:03.501 ******** 2026-03-07 00:52:31.243112 | orchestrator | skipping: [localhost] 2026-03-07 00:52:31.243120 | orchestrator | 2026-03-07 00:52:31.243128 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-03-07 00:52:31.243136 | orchestrator | Saturday 07 March 2026 00:49:46 +0000 (0:00:00.161) 0:00:03.663 ******** 2026-03-07 00:52:31.243144 | orchestrator | ok: [localhost] 2026-03-07 00:52:31.243152 | orchestrator | 2026-03-07 00:52:31.243159 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 00:52:31.243167 | orchestrator | 2026-03-07 00:52:31.243175 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 00:52:31.243183 | orchestrator | Saturday 07 March 2026 00:49:47 +0000 (0:00:00.524) 0:00:04.187 
******** 2026-03-07 00:52:31.243191 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:52:31.243199 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:52:31.243207 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:52:31.243215 | orchestrator | 2026-03-07 00:52:31.243223 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 00:52:31.243231 | orchestrator | Saturday 07 March 2026 00:49:48 +0000 (0:00:01.731) 0:00:05.918 ******** 2026-03-07 00:52:31.243239 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-07 00:52:31.243248 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-07 00:52:31.243255 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-07 00:52:31.243263 | orchestrator | 2026-03-07 00:52:31.243271 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-07 00:52:31.243279 | orchestrator | 2026-03-07 00:52:31.243287 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-07 00:52:31.243295 | orchestrator | Saturday 07 March 2026 00:49:50 +0000 (0:00:02.104) 0:00:08.023 ******** 2026-03-07 00:52:31.243303 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:52:31.243311 | orchestrator | 2026-03-07 00:52:31.243324 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-07 00:52:31.243332 | orchestrator | Saturday 07 March 2026 00:49:51 +0000 (0:00:00.853) 0:00:08.876 ******** 2026-03-07 00:52:31.243340 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:52:31.243348 | orchestrator | 2026-03-07 00:52:31.243356 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-07 00:52:31.243363 | orchestrator | Saturday 07 March 2026 00:49:53 +0000 (0:00:01.496) 
0:00:10.373 ******** 2026-03-07 00:52:31.243371 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:52:31.243379 | orchestrator | 2026-03-07 00:52:31.243387 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-07 00:52:31.243395 | orchestrator | Saturday 07 March 2026 00:49:54 +0000 (0:00:00.771) 0:00:11.144 ******** 2026-03-07 00:52:31.243403 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:52:31.243411 | orchestrator | 2026-03-07 00:52:31.243425 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-07 00:52:31.243433 | orchestrator | Saturday 07 March 2026 00:49:54 +0000 (0:00:00.847) 0:00:11.992 ******** 2026-03-07 00:52:31.243441 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:52:31.243471 | orchestrator | 2026-03-07 00:52:31.243479 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-07 00:52:31.243487 | orchestrator | Saturday 07 March 2026 00:49:55 +0000 (0:00:00.607) 0:00:12.599 ******** 2026-03-07 00:52:31.243495 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:52:31.243503 | orchestrator | 2026-03-07 00:52:31.243511 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-07 00:52:31.243519 | orchestrator | Saturday 07 March 2026 00:49:57 +0000 (0:00:02.200) 0:00:14.800 ******** 2026-03-07 00:52:31.243527 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-1, testbed-node-0, testbed-node-2 2026-03-07 00:52:31.243535 | orchestrator | 2026-03-07 00:52:31.243542 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-07 00:52:31.243556 | orchestrator | Saturday 07 March 2026 00:49:59 +0000 (0:00:01.970) 0:00:16.771 ******** 2026-03-07 00:52:31.243564 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:52:31.243585 | 
orchestrator | 2026-03-07 00:52:31.243593 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-07 00:52:31.243601 | orchestrator | Saturday 07 March 2026 00:50:00 +0000 (0:00:00.984) 0:00:17.755 ******** 2026-03-07 00:52:31.243609 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:52:31.243616 | orchestrator | 2026-03-07 00:52:31.243624 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-07 00:52:31.243632 | orchestrator | Saturday 07 March 2026 00:50:01 +0000 (0:00:00.481) 0:00:18.237 ******** 2026-03-07 00:52:31.243640 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:52:31.243647 | orchestrator | 2026-03-07 00:52:31.243660 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-07 00:52:31.243674 | orchestrator | Saturday 07 March 2026 00:50:01 +0000 (0:00:00.466) 0:00:18.703 ******** 2026-03-07 00:52:31.243692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': 
{'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-07 00:52:31.243721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-07 00:52:31.243745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-07 00:52:31.243762 | orchestrator | 2026-03-07 00:52:31.243776 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-07 00:52:31.243790 | orchestrator | Saturday 07 March 2026 00:50:02 +0000 (0:00:01.049) 0:00:19.753 ******** 2026-03-07 00:52:31.243816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-07 00:52:31.243830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-07 00:52:31.243856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-07 00:52:31.243872 | orchestrator | 2026-03-07 00:52:31.243886 | orchestrator | 
TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-07 00:52:31.243900 | orchestrator | Saturday 07 March 2026 00:50:05 +0000 (0:00:02.790) 0:00:22.544 ******** 2026-03-07 00:52:31.243920 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-07 00:52:31.243935 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-07 00:52:31.243948 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-07 00:52:31.243963 | orchestrator | 2026-03-07 00:52:31.243976 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-07 00:52:31.243990 | orchestrator | Saturday 07 March 2026 00:50:09 +0000 (0:00:04.290) 0:00:26.834 ******** 2026-03-07 00:52:31.244002 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-07 00:52:31.244010 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-07 00:52:31.244018 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-07 00:52:31.244026 | orchestrator | 2026-03-07 00:52:31.244040 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-07 00:52:31.244048 | orchestrator | Saturday 07 March 2026 00:50:13 +0000 (0:00:04.003) 0:00:30.837 ******** 2026-03-07 00:52:31.244056 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-07 00:52:31.244064 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-07 00:52:31.244071 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-07 00:52:31.244079 | orchestrator | 
TASK [rabbitmq : Copying over advanced.config] *********************************
Saturday 07 March 2026 00:50:16 +0000 (0:00:02.737) 0:00:33.575 ********
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)

TASK [rabbitmq : Copying over definitions.json] ********************************
Saturday 07 March 2026 00:50:22 +0000 (0:00:05.550) 0:00:39.125 ********
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)

TASK [rabbitmq : Copying over enabled_plugins] *********************************
Saturday 07 March 2026 00:50:24 +0000 (0:00:02.604) 0:00:41.729 ********
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)

TASK [rabbitmq : include_tasks] ************************************************
Saturday 07 March 2026 00:50:27 +0000 (0:00:02.525) 0:00:44.255 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [rabbitmq : Check rabbitmq containers] ************************************
Saturday 07 March 2026 00:50:29 +0000 (0:00:02.056) 0:00:46.312 ********
changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})

TASK [rabbitmq : Creating rabbitmq volume] *************************************
Saturday 07 March 2026 00:50:31 +0000 (0:00:02.600) 0:00:48.912 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
Saturday 07 March 2026 00:50:32 +0000 (0:00:01.019) 0:00:49.932 ********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
Saturday 07 March 2026 00:50:40 +0000 (0:00:08.028) 0:00:57.961 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

PLAY [Restart rabbitmq services] ***********************************************

TASK [rabbitmq : Get info on RabbitMQ container] *******************************
Saturday 07 March 2026 00:50:41 +0000 (0:00:00.874) 0:00:58.835 ********
ok: [testbed-node-0]

TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
Saturday 07 March 2026 00:50:42 +0000 (0:00:00.827) 0:00:59.663 ********
skipping: [testbed-node-0]

TASK [rabbitmq : Restart rabbitmq container] ***********************************
Saturday 07 March 2026 00:50:43 +0000 (0:00:00.464) 0:01:00.128 ********
changed: [testbed-node-0]

TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
Saturday 07 March 2026 00:50:45 +0000 (0:00:02.206) 0:01:02.335 ********
changed: [testbed-node-0]

PLAY [Restart rabbitmq services] ***********************************************

TASK [rabbitmq : Get info on RabbitMQ container] *******************************
Saturday 07 March 2026 00:51:43 +0000 (0:00:57.860) 0:02:00.195 ********
ok: [testbed-node-1]

TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
Saturday 07 March 2026 00:51:43 +0000 (0:00:00.794) 0:02:00.990 ********
skipping: [testbed-node-1]

TASK [rabbitmq : Restart rabbitmq container] ***********************************
Saturday 07 March 2026 00:51:44 +0000 (0:00:00.352) 0:02:01.342 ********
changed: [testbed-node-1]

TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
Saturday 07 March 2026 00:51:47 +0000 (0:00:03.055) 0:02:04.398 ********
changed: [testbed-node-1]

PLAY [Restart rabbitmq services] ***********************************************

TASK [rabbitmq : Get info on RabbitMQ container] *******************************
Saturday 07 March 2026 00:52:05 +0000 (0:00:18.101) 0:02:22.500 ********
ok: [testbed-node-2]

TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
Saturday 07 March 2026 00:52:06 +0000 (0:00:00.670) 0:02:23.171 ********
skipping: [testbed-node-2]

TASK [rabbitmq : Restart rabbitmq container] ***********************************
Saturday 07 March 2026 00:52:06 +0000 (0:00:00.258) 0:02:23.430 ********
changed: [testbed-node-2]

TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
Saturday 07 March 2026 00:52:13 +0000 (0:00:07.040) 0:02:30.470 ********
changed: [testbed-node-2]

PLAY [Apply rabbitmq post-configuration] ***************************************

TASK [Include rabbitmq post-deploy.yml] ****************************************
Saturday 07 March 2026 00:52:26 +0000 (0:00:13.254) 0:02:43.725 ********
included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2

TASK [rabbitmq : Enable all stable feature flags] ******************************
Saturday 07 March 2026 00:52:27 +0000 (0:00:00.711) 0:02:44.437 ********
[WARNING]: Could not match supplied host pattern, ignoring: enable_outward_rabbitmq_True
[WARNING]: Could not match supplied host pattern, ignoring: outward_rabbitmq_restart
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

PLAY [Apply role rabbitmq (outward)] *******************************************
skipping: no hosts matched

PLAY [Restart rabbitmq (outward) services] *************************************
skipping: no hosts matched

PLAY [Apply rabbitmq (outward) post-configuration] *****************************
skipping: no hosts matched

PLAY RECAP *********************************************************************
localhost      : ok=3   changed=0   unreachable=0  failed=0  skipped=1  rescued=0  ignored=1
testbed-node-0 : ok=23  changed=14  unreachable=0  failed=0  skipped=8  rescued=0  ignored=0
testbed-node-1 : ok=21  changed=14  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
testbed-node-2 : ok=21  changed=14  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Saturday 07 March 2026 00:52:30 +0000 (0:00:02.907) 0:02:47.345 ********
===============================================================================
rabbitmq : Waiting for rabbitmq to start ------------------------------- 89.22s
rabbitmq : Restart rabbitmq container ---------------------------------- 12.30s
rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.03s
rabbitmq : Copying over advanced.config --------------------------------- 5.55s
rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 4.29s
rabbitmq : Copying over rabbitmq.conf ----------------------------------- 4.00s
Check RabbitMQ service -------------------------------------------------- 3.08s
rabbitmq : Enable all stable feature flags ------------------------------ 2.91s
rabbitmq : Copying over config.json files for services ------------------ 2.79s
rabbitmq : Copying over erl_inetrc -------------------------------------- 2.74s
rabbitmq : Copying over definitions.json -------------------------------- 2.60s
rabbitmq : Check rabbitmq containers ------------------------------------ 2.60s
rabbitmq : Copying over enabled_plugins --------------------------------- 2.53s
rabbitmq : Get info on RabbitMQ container ------------------------------- 2.29s
rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 2.20s
Group hosts based on enabled services ----------------------------------- 2.10s
rabbitmq : include_tasks ------------------------------------------------ 2.06s
rabbitmq : include_tasks ------------------------------------------------ 1.97s
Group hosts based on Kolla action --------------------------------------- 1.73s
rabbitmq : Get container facts ------------------------------------------ 1.50s

2026-03-07 00:52:31 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:52:31 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:52:31 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:52:34 | INFO  | Task dcab45ef-8630-471e-94a6-64ca26753e28 is in state STARTED
2026-03-07 00:52:34 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:52:34 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:52:34 | INFO  | Wait 1 second(s) until the next check
[... identical STARTED/wait cycles for the same three tasks repeat every ~3 s until 00:53:29 ...]
2026-03-07 00:53:32 | INFO  | Task dcab45ef-8630-471e-94a6-64ca26753e28 is in state SUCCESS

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Saturday 07 March 2026 00:50:44 +0000 (0:00:00.220) 0:00:00.220 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Saturday 07 March 2026 00:50:45 +0000 (0:00:01.232) 0:00:01.453 ********
ok: [testbed-node-3] => (item=enable_ovn_True)
ok: [testbed-node-4] => (item=enable_ovn_True)
ok: [testbed-node-5] => (item=enable_ovn_True)
ok: [testbed-node-0] => (item=enable_ovn_True)
ok: [testbed-node-1] => (item=enable_ovn_True)
ok: [testbed-node-2] => (item=enable_ovn_True)

PLAY [Apply role ovn-controller] ***********************************************

TASK [ovn-controller : include_tasks] ******************************************
Saturday 07 March 2026 00:50:47 +0000 (0:00:01.528) 0:00:02.982 ********
included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ovn-controller : Ensuring config directories exist] **********************
Saturday 07 March 2026 00:50:48 +0000 (0:00:01.706) 0:00:04.689 ********
changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})

TASK [ovn-controller : Copying over config.json files for services] ************
Saturday 07 March 2026 00:50:50 +0000 (0:00:01.559) 0:00:06.248 ********
changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})

TASK [ovn-controller : Ensuring systemd override directory exists] *************
Saturday 07 March 2026 00:50:52 +0000 (0:00:02.266) 0:00:08.515 ********
changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:32.278391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:32.278401 | orchestrator | 2026-03-07 00:53:32.278411 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-07 00:53:32.278420 | orchestrator | Saturday 07 March 2026 00:50:54 +0000 (0:00:01.541) 0:00:10.057 ******** 2026-03-07 00:53:32.278434 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:32.278445 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:32.278455 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:32.278471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:32.278481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:32.278498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:32.278508 | orchestrator | 2026-03-07 00:53:32.278518 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-07 00:53:32.278527 | orchestrator | Saturday 07 March 2026 00:50:56 +0000 (0:00:02.036) 0:00:12.093 ******** 
2026-03-07 00:53:32.278537 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:32.278566 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:32.278576 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:32.278590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:32.278601 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:32.278611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 00:53:32.278626 | orchestrator | 2026-03-07 00:53:32.278636 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-07 00:53:32.278646 | orchestrator | Saturday 07 March 2026 00:50:59 +0000 (0:00:02.783) 0:00:14.877 ******** 2026-03-07 00:53:32.278656 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:53:32.278666 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:53:32.278676 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:53:32.278686 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:53:32.278695 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:53:32.278705 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:53:32.278715 | orchestrator | 2026-03-07 00:53:32.278724 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-07 00:53:32.278734 | orchestrator | Saturday 07 March 2026 00:51:01 +0000 (0:00:02.922) 0:00:17.799 ******** 2026-03-07 00:53:32.278744 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
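The `Configure OVN in OVSDB` task whose output follows writes chassis-level settings into each node's local Open vSwitch database as `external_ids`. A minimal sketch of how those values fit together, using only the IPs, port, and option names visible in this log (the helper function is illustrative, not kolla-ansible code):

```python
# Illustrative sketch: assemble the OVN chassis settings seen in the
# "Configure OVN in OVSDB" task output. The controller IPs and the
# southbound DB port 6642 are taken from this log; nothing here is
# actual kolla-ansible role code.
controller_ips = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]

# ovn-remote points every chassis at all southbound DB members.
ovn_remote = ",".join(f"tcp:{ip}:6642" for ip in controller_ips)

def chassis_external_ids(encap_ip: str) -> dict:
    """Per-chassis external_ids, mirroring the items in the task loop."""
    return {
        "ovn-encap-ip": encap_ip,              # this node's tunnel endpoint
        "ovn-encap-type": "geneve",            # overlay encapsulation
        "ovn-remote": ovn_remote,              # SB DB cluster endpoints
        "ovn-remote-probe-interval": "60000",  # milliseconds
        "ovn-openflow-probe-interval": "60",   # seconds
    }

print(chassis_external_ids("192.168.16.13")["ovn-remote"])
```

On the node itself, each entry would typically be applied with something like `ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-type=geneve`, which is what the task's `changed`/`ok` results per item reflect.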
2026-03-07 00:53:32.278754 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-07 00:53:32.278764 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-07 00:53:32.278812 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-07 00:53:32.278823 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-07 00:53:32.278833 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-07 00:53:32.278842 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-07 00:53:32.278852 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-07 00:53:32.278862 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-07 00:53:32.278871 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-07 00:53:32.278881 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-07 00:53:32.278890 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-07 00:53:32.278900 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-07 00:53:32.278911 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-07 00:53:32.278921 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-07 00:53:32.278931 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-07 00:53:32.278941 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-07 00:53:32.278950 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-07 00:53:32.278960 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-07 00:53:32.278971 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-07 00:53:32.278988 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-07 00:53:32.279003 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-07 00:53:32.279013 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-07 00:53:32.279022 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-07 00:53:32.279032 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-07 00:53:32.279041 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-07 00:53:32.279051 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-07 00:53:32.279060 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-07 00:53:32.279070 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': 
False}) 2026-03-07 00:53:32.279079 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-07 00:53:32.279089 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-07 00:53:32.279098 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-07 00:53:32.279108 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-07 00:53:32.279117 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-07 00:53:32.279127 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-07 00:53:32.279137 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-07 00:53:32.279147 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-07 00:53:32.279156 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-07 00:53:32.279166 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-07 00:53:32.279176 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-07 00:53:32.279191 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-07 00:53:32.279201 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-07 00:53:32.279211 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 
'present'}) 2026-03-07 00:53:32.279221 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-07 00:53:32.279231 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-07 00:53:32.279241 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-07 00:53:32.279250 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-07 00:53:32.279260 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-07 00:53:32.279269 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-07 00:53:32.279285 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-07 00:53:32.279295 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-07 00:53:32.279305 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-07 00:53:32.279314 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-07 00:53:32.279324 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-07 00:53:32.279334 | orchestrator | 2026-03-07 00:53:32.279343 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-03-07 00:53:32.279353 | orchestrator | Saturday 07 March 2026 00:51:25 +0000 (0:00:23.570) 0:00:41.370 ******** 2026-03-07 00:53:32.279363 | orchestrator | 2026-03-07 00:53:32.279372 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-07 00:53:32.279382 | orchestrator | Saturday 07 March 2026 00:51:25 +0000 (0:00:00.121) 0:00:41.491 ******** 2026-03-07 00:53:32.279392 | orchestrator | 2026-03-07 00:53:32.279401 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-07 00:53:32.279411 | orchestrator | Saturday 07 March 2026 00:51:25 +0000 (0:00:00.180) 0:00:41.672 ******** 2026-03-07 00:53:32.279420 | orchestrator | 2026-03-07 00:53:32.279430 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-07 00:53:32.279439 | orchestrator | Saturday 07 March 2026 00:51:25 +0000 (0:00:00.111) 0:00:41.783 ******** 2026-03-07 00:53:32.279449 | orchestrator | 2026-03-07 00:53:32.279458 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-07 00:53:32.279468 | orchestrator | Saturday 07 March 2026 00:51:26 +0000 (0:00:00.157) 0:00:41.941 ******** 2026-03-07 00:53:32.279478 | orchestrator | 2026-03-07 00:53:32.279487 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-07 00:53:32.279496 | orchestrator | Saturday 07 March 2026 00:51:26 +0000 (0:00:00.146) 0:00:42.087 ******** 2026-03-07 00:53:32.279506 | orchestrator | 2026-03-07 00:53:32.279516 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-07 00:53:32.279525 | orchestrator | Saturday 07 March 2026 00:51:26 +0000 (0:00:00.079) 0:00:42.167 ******** 2026-03-07 00:53:32.279535 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:53:32.279590 | orchestrator | ok: 
[testbed-node-5] 2026-03-07 00:53:32.279600 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:53:32.279610 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:53:32.279620 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:53:32.279629 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:53:32.279638 | orchestrator | 2026-03-07 00:53:32.279648 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-07 00:53:32.279658 | orchestrator | Saturday 07 March 2026 00:51:28 +0000 (0:00:01.883) 0:00:44.051 ******** 2026-03-07 00:53:32.279668 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:53:32.279677 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:53:32.279687 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:53:32.279697 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:53:32.279707 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:53:32.279716 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:53:32.279726 | orchestrator | 2026-03-07 00:53:32.279735 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-07 00:53:32.279745 | orchestrator | 2026-03-07 00:53:32.279755 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-07 00:53:32.279764 | orchestrator | Saturday 07 March 2026 00:52:00 +0000 (0:00:31.817) 0:01:15.868 ******** 2026-03-07 00:53:32.279774 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:53:32.279790 | orchestrator | 2026-03-07 00:53:32.279864 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-07 00:53:32.279883 | orchestrator | Saturday 07 March 2026 00:52:01 +0000 (0:00:01.200) 0:01:17.069 ******** 2026-03-07 00:53:32.279893 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-07 00:53:32.279903 | orchestrator | 2026-03-07 00:53:32.279921 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-07 00:53:32.279932 | orchestrator | Saturday 07 March 2026 00:52:01 +0000 (0:00:00.779) 0:01:17.848 ******** 2026-03-07 00:53:32.279941 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:53:32.279951 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:53:32.279961 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:53:32.279971 | orchestrator | 2026-03-07 00:53:32.279981 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-07 00:53:32.279991 | orchestrator | Saturday 07 March 2026 00:52:03 +0000 (0:00:01.902) 0:01:19.752 ******** 2026-03-07 00:53:32.280000 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:53:32.280010 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:53:32.280019 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:53:32.280029 | orchestrator | 2026-03-07 00:53:32.280038 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-07 00:53:32.280048 | orchestrator | Saturday 07 March 2026 00:52:04 +0000 (0:00:00.425) 0:01:20.177 ******** 2026-03-07 00:53:32.280057 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:53:32.280067 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:53:32.280076 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:53:32.280086 | orchestrator | 2026-03-07 00:53:32.280096 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-07 00:53:32.280105 | orchestrator | Saturday 07 March 2026 00:52:04 +0000 (0:00:00.527) 0:01:20.704 ******** 2026-03-07 00:53:32.280114 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:53:32.280124 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:53:32.280133 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:53:32.280143 | orchestrator | 
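The `Divide hosts by their OVN NB/SB volume availability` steps above decide the bootstrap path: hosts that already carry a DB container volume are treated as existing cluster members, while the rest must be bootstrapped. A sketch of that split under stated assumptions (function and variable names are mine, not the role's):

```python
# Illustrative sketch of the volume-availability split performed by the
# ovn-db role's lookup_cluster step. The names are assumptions; only
# the grouping logic mirrors what the log shows.
def divide_hosts(volume_present: dict) -> tuple:
    """Return (hosts with an existing DB volume, hosts needing bootstrap)."""
    have = sorted(h for h, present in volume_present.items() if present)
    need = sorted(h for h, present in volume_present.items() if not present)
    return have, need

# Fresh deployment, as in this log: no volumes exist yet, so all three
# controller hosts land in the bootstrap group and the subsequent
# bootstrap-initial.yml tasks create a brand-new NB/SB cluster.
have, need = divide_hosts({
    "testbed-node-0": False,
    "testbed-node-1": False,
    "testbed-node-2": False,
})
print(need)
```

This is why the later `Check NB cluster status` and `new member` tasks are all skipped here: with no pre-existing volumes, only the new-cluster branch runs.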
2026-03-07 00:53:32.280152 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-07 00:53:32.280162 | orchestrator | Saturday 07 March 2026 00:52:05 +0000 (0:00:00.820) 0:01:21.524 ******** 2026-03-07 00:53:32.280171 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:53:32.280181 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:53:32.280190 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:53:32.280199 | orchestrator | 2026-03-07 00:53:32.280209 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-07 00:53:32.280218 | orchestrator | Saturday 07 March 2026 00:52:06 +0000 (0:00:00.796) 0:01:22.321 ******** 2026-03-07 00:53:32.280228 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:53:32.280238 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:53:32.280247 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:53:32.280257 | orchestrator | 2026-03-07 00:53:32.280267 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-07 00:53:32.280276 | orchestrator | Saturday 07 March 2026 00:52:06 +0000 (0:00:00.375) 0:01:22.696 ******** 2026-03-07 00:53:32.280286 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:53:32.280295 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:53:32.280305 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:53:32.280314 | orchestrator | 2026-03-07 00:53:32.280324 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-07 00:53:32.280333 | orchestrator | Saturday 07 March 2026 00:52:07 +0000 (0:00:00.342) 0:01:23.039 ******** 2026-03-07 00:53:32.280343 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:53:32.280357 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:53:32.280367 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:53:32.280376 | orchestrator | 2026-03-07 
00:53:32.280386 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-07 00:53:32.280403 | orchestrator | Saturday 07 March 2026 00:52:07 +0000 (0:00:00.342) 0:01:23.381 ******** 2026-03-07 00:53:32.280413 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:53:32.280423 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:53:32.280432 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:53:32.280442 | orchestrator | 2026-03-07 00:53:32.280451 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-07 00:53:32.280461 | orchestrator | Saturday 07 March 2026 00:52:08 +0000 (0:00:00.621) 0:01:24.003 ******** 2026-03-07 00:53:32.280470 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:53:32.280480 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:53:32.280489 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:53:32.280499 | orchestrator | 2026-03-07 00:53:32.280508 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-07 00:53:32.280518 | orchestrator | Saturday 07 March 2026 00:52:08 +0000 (0:00:00.347) 0:01:24.350 ******** 2026-03-07 00:53:32.280527 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:53:32.280537 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:53:32.280574 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:53:32.280584 | orchestrator | 2026-03-07 00:53:32.280600 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-07 00:53:32.280617 | orchestrator | Saturday 07 March 2026 00:52:08 +0000 (0:00:00.339) 0:01:24.690 ******** 2026-03-07 00:53:32.280634 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:53:32.280651 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:53:32.280665 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:53:32.280681 | orchestrator | 2026-03-07 
00:53:32.280696 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-03-07 00:53:32.280712 | orchestrator | Saturday 07 March 2026 00:52:09 +0000 (0:00:00.319) 0:01:25.009 ********
2026-03-07 00:53:32.280727 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:32.280743 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:32.280758 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:32.280774 | orchestrator |
2026-03-07 00:53:32.280790 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-03-07 00:53:32.280806 | orchestrator | Saturday 07 March 2026 00:52:09 +0000 (0:00:00.658) 0:01:25.668 ********
2026-03-07 00:53:32.280822 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:32.280839 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:32.280855 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:32.280870 | orchestrator |
2026-03-07 00:53:32.280885 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-03-07 00:53:32.280901 | orchestrator | Saturday 07 March 2026 00:52:10 +0000 (0:00:00.408) 0:01:26.077 ********
2026-03-07 00:53:32.280918 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:32.280934 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:32.280951 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:32.280968 | orchestrator |
2026-03-07 00:53:32.280995 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-03-07 00:53:32.281013 | orchestrator | Saturday 07 March 2026 00:52:10 +0000 (0:00:00.379) 0:01:26.456 ********
2026-03-07 00:53:32.281030 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:32.281047 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:32.281065 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:32.281081 | orchestrator |
2026-03-07 00:53:32.281098 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-03-07 00:53:32.281114 | orchestrator | Saturday 07 March 2026 00:52:10 +0000 (0:00:00.387) 0:01:26.844 ********
2026-03-07 00:53:32.281131 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:32.281149 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:32.281165 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:32.281182 | orchestrator |
2026-03-07 00:53:32.281199 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-07 00:53:32.281216 | orchestrator | Saturday 07 March 2026 00:52:11 +0000 (0:00:00.603) 0:01:27.448 ********
2026-03-07 00:53:32.281245 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:53:32.281261 | orchestrator |
2026-03-07 00:53:32.281277 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-03-07 00:53:32.281294 | orchestrator | Saturday 07 March 2026 00:52:12 +0000 (0:00:00.717) 0:01:28.166 ********
2026-03-07 00:53:32.281310 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:32.281327 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:32.281344 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:32.281360 | orchestrator |
2026-03-07 00:53:32.281376 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-03-07 00:53:32.281393 | orchestrator | Saturday 07 March 2026 00:52:12 +0000 (0:00:00.587) 0:01:28.753 ********
2026-03-07 00:53:32.281409 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:32.281425 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:32.281441 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:32.281458 | orchestrator |
2026-03-07 00:53:32.281475 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-03-07 00:53:32.281491 | orchestrator | Saturday 07 March 2026 00:52:13 +0000 (0:00:00.867) 0:01:29.621 ********
2026-03-07 00:53:32.281508 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:32.281525 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:32.281541 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:32.281583 | orchestrator |
2026-03-07 00:53:32.281594 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-03-07 00:53:32.281604 | orchestrator | Saturday 07 March 2026 00:52:14 +0000 (0:00:00.692) 0:01:30.314 ********
2026-03-07 00:53:32.281613 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:32.281623 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:32.281632 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:32.281642 | orchestrator |
2026-03-07 00:53:32.281652 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-03-07 00:53:32.281662 | orchestrator | Saturday 07 March 2026 00:52:14 +0000 (0:00:00.472) 0:01:30.786 ********
2026-03-07 00:53:32.281678 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:32.281688 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:32.281698 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:32.281707 | orchestrator |
2026-03-07 00:53:32.281717 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-03-07 00:53:32.281727 | orchestrator | Saturday 07 March 2026 00:52:15 +0000 (0:00:00.400) 0:01:31.187 ********
2026-03-07 00:53:32.281737 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:32.281746 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:32.281756 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:32.281766 | orchestrator |
2026-03-07 00:53:32.281776 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-03-07 00:53:32.281786 | orchestrator | Saturday 07 March 2026 00:52:15 +0000 (0:00:00.406) 0:01:31.594 ********
2026-03-07 00:53:32.281795 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:32.281805 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:32.281814 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:32.281824 | orchestrator |
2026-03-07 00:53:32.281833 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-03-07 00:53:32.281843 | orchestrator | Saturday 07 March 2026 00:52:16 +0000 (0:00:00.641) 0:01:32.235 ********
2026-03-07 00:53:32.281852 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:32.281862 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:32.281871 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:32.281881 | orchestrator |
2026-03-07 00:53:32.281891 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-07 00:53:32.281900 | orchestrator | Saturday 07 March 2026 00:52:16 +0000 (0:00:00.368) 0:01:32.603 ********
2026-03-07 00:53:32.281912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.281933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.281952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.281964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.281977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.281988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.281998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282094 | orchestrator |
2026-03-07 00:53:32.282104 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-07 00:53:32.282114 | orchestrator | Saturday 07 March 2026 00:52:18 +0000 (0:00:01.689) 0:01:34.293 ********
2026-03-07 00:53:32.282132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282237 | orchestrator |
2026-03-07 00:53:32.282247 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-07 00:53:32.282263 | orchestrator | Saturday 07 March 2026 00:52:23 +0000 (0:00:05.250) 0:01:39.544 ********
2026-03-07 00:53:32.282273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.282383 | orchestrator |
2026-03-07 00:53:32.282393 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-07 00:53:32.282403 | orchestrator | Saturday 07 March 2026 00:52:26 +0000 (0:00:02.659) 0:01:42.203 ********
2026-03-07 00:53:32.282413 | orchestrator |
2026-03-07 00:53:32.282423 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-07 00:53:32.282433 | orchestrator | Saturday 07 March 2026 00:52:26 +0000 (0:00:00.076) 0:01:42.279 ********
2026-03-07 00:53:32.282442 | orchestrator |
2026-03-07 00:53:32.282452 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-07 00:53:32.282462 | orchestrator | Saturday 07 March 2026 00:52:26 +0000 (0:00:00.072) 0:01:42.351 ********
2026-03-07 00:53:32.282472 | orchestrator |
2026-03-07 00:53:32.282482 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-07 00:53:32.282491 | orchestrator | Saturday 07 March 2026 00:52:26 +0000 (0:00:00.096) 0:01:42.448 ********
2026-03-07 00:53:32.282501 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:53:32.282511 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:53:32.282521 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:53:32.282531 | orchestrator |
2026-03-07 00:53:32.282541 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-07 00:53:32.282573 | orchestrator | Saturday 07 March 2026 00:52:29 +0000 (0:00:02.743) 0:01:45.191 ********
2026-03-07 00:53:32.282582 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:53:32.282592 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:53:32.282602 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:53:32.282611 | orchestrator |
2026-03-07 00:53:32.282621 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-07 00:53:32.282631 | orchestrator | Saturday 07 March 2026 00:52:36 +0000 (0:00:06.679) 0:01:51.870 ********
2026-03-07 00:53:32.282640 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:53:32.282650 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:53:32.282660 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:53:32.282670 | orchestrator |
2026-03-07 00:53:32.282680 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-07 00:53:32.282689 | orchestrator | Saturday 07 March 2026 00:52:44 +0000 (0:00:08.473) 0:02:00.344 ********
2026-03-07 00:53:32.282699 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:32.282709 | orchestrator |
2026-03-07 00:53:32.282718 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-07 00:53:32.282728 | orchestrator | Saturday 07 March 2026 00:52:44 +0000 (0:00:00.138) 0:02:00.483 ********
2026-03-07 00:53:32.282738 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:32.282747 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:32.282757 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:32.282767 | orchestrator |
2026-03-07 00:53:32.282784 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-07 00:53:32.282794 | orchestrator | Saturday 07 March 2026 00:52:46 +0000 (0:00:01.419) 0:02:01.902 ********
2026-03-07 00:53:32.282803 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:32.282813 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:32.282823 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:53:32.282833 | orchestrator |
2026-03-07 00:53:32.282842 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-07 00:53:32.282852 | orchestrator | Saturday 07 March 2026 00:52:46 +0000 (0:00:00.696) 0:02:02.598 ********
2026-03-07 00:53:32.282862 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:32.282871 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:32.282881 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:32.282891 | orchestrator |
2026-03-07 00:53:32.282901 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-07 00:53:32.282910 | orchestrator | Saturday 07 March 2026 00:52:47 +0000 (0:00:01.088) 0:02:03.687 ********
2026-03-07 00:53:32.282920 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:53:32.282930 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:53:32.282947 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:53:32.282957 | orchestrator |
2026-03-07 00:53:32.282966 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-07 00:53:32.282976 | orchestrator | Saturday 07 March 2026 00:52:49 +0000 (0:00:01.377) 0:02:05.065 ********
2026-03-07 00:53:32.282986 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:32.282996 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:32.283005 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:32.283015 | orchestrator |
2026-03-07 00:53:32.283024 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-07 00:53:32.283034 | orchestrator | Saturday 07 March 2026 00:52:50 +0000 (0:00:01.146) 0:02:06.211 ********
2026-03-07 00:53:32.283044 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:32.283054 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:32.283064 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:32.283073 | orchestrator |
2026-03-07 00:53:32.283083 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-03-07 00:53:32.283093 | orchestrator | Saturday 07 March 2026 00:52:51 +0000 (0:00:00.900) 0:02:07.112 ********
2026-03-07 00:53:32.283102 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:32.283112 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:32.283122 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:32.283131 | orchestrator |
2026-03-07 00:53:32.283141 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-07 00:53:32.283151 | orchestrator | Saturday 07 March 2026 00:52:51 +0000 (0:00:00.428) 0:02:07.540 ********
2026-03-07 00:53:32.283161 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283177 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283188 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283198 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283208 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283219 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283245 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283256 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283266 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283276 | orchestrator |
2026-03-07 00:53:32.283286 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-07 00:53:32.283295 | orchestrator | Saturday 07 March 2026 00:52:53 +0000 (0:00:02.186) 0:02:09.727 ********
2026-03-07 00:53:32.283305 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283320 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283330 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283340 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283398 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283419 | orchestrator |
2026-03-07 00:53:32.283428 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-07 00:53:32.283438 | orchestrator | Saturday 07 March 2026 00:52:58 +0000 (0:00:04.769) 0:02:14.496 ********
2026-03-07 00:53:32.283448 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283458 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283473 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283494 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283595 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 00:53:32.283606 | orchestrator |
2026-03-07 00:53:32.283616 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-07 00:53:32.283625 | orchestrator | Saturday 07 March 2026 00:53:01 +0000 (0:00:03.200) 0:02:17.696 ********
2026-03-07 00:53:32.283635 | orchestrator |
2026-03-07 00:53:32.283643 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-07 00:53:32.283651 | orchestrator | Saturday 07 March 2026 00:53:01 +0000 (0:00:00.068) 0:02:17.765 ********
2026-03-07 00:53:32.283659 | orchestrator |
2026-03-07 00:53:32.283667 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-07 00:53:32.283675 | orchestrator | Saturday 07 March 2026 00:53:02 +0000 (0:00:00.107) 0:02:17.872 ********
2026-03-07 00:53:32.283683 | orchestrator |
2026-03-07 00:53:32.283691 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-07 00:53:32.283698 | orchestrator | Saturday 07 March 2026 00:53:02 +0000 (0:00:00.094) 0:02:17.967 ********
2026-03-07 00:53:32.283706 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:53:32.283714 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:53:32.283722 | orchestrator |
2026-03-07 00:53:32.283730 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-07 00:53:32.283738 | orchestrator | Saturday 07 March 2026 00:53:08 +0000 (0:00:06.325) 0:02:24.292 ********
2026-03-07 00:53:32.283745 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:53:32.283754 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:53:32.283762 | orchestrator |
2026-03-07 00:53:32.283769 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-07 00:53:32.283778 | orchestrator | Saturday 07 March 2026 00:53:15 +0000 (0:00:06.989) 0:02:31.287 ********
2026-03-07 00:53:32.283785 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:53:32.283793 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:53:32.283801 | orchestrator |
2026-03-07 00:53:32.283809 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-07 00:53:32.283817 | orchestrator | Saturday 07 March 2026 00:53:22 +0000 (0:00:06.656) 0:02:37.944 ********
2026-03-07 00:53:32.283825 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:53:32.283833 | orchestrator |
2026-03-07 00:53:32.283846 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-07 00:53:32.283854 | orchestrator | Saturday 07 March 2026 00:53:22 +0000 (0:00:00.171) 0:02:38.115 ********
2026-03-07 00:53:32.283862 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:53:32.283875 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:53:32.283883 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:53:32.283891 | orchestrator |
2026-03-07 00:53:32.283899 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-07 00:53:32.283907 | orchestrator | Saturday 07 March 2026 00:53:23 +0000 (0:00:00.989) 0:02:39.105 ********
2026-03-07 00:53:32.283914 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:53:32.283922 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:53:32.283930 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:53:32.283938 | orchestrator | 2026-03-07 00:53:32.283946 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-07 00:53:32.283954 | orchestrator | Saturday 07 March 2026 00:53:24 +0000 (0:00:00.846) 0:02:39.952 ******** 2026-03-07 00:53:32.283962 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:53:32.283970 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:53:32.283978 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:53:32.283986 | orchestrator | 2026-03-07 00:53:32.283994 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-07 00:53:32.284002 | orchestrator | Saturday 07 March 2026 00:53:25 +0000 (0:00:01.135) 0:02:41.088 ******** 2026-03-07 00:53:32.284009 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:53:32.284017 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:53:32.284025 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:53:32.284033 | orchestrator | 2026-03-07 00:53:32.284041 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-07 00:53:32.284049 | orchestrator | Saturday 07 March 2026 00:53:26 +0000 (0:00:00.886) 0:02:41.974 ******** 2026-03-07 00:53:32.284057 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:53:32.284065 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:53:32.284073 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:53:32.284081 | orchestrator | 2026-03-07 00:53:32.284089 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-07 00:53:32.284097 | orchestrator | Saturday 07 March 2026 00:53:27 +0000 (0:00:00.924) 0:02:42.899 ******** 2026-03-07 00:53:32.284105 | orchestrator 
| ok: [testbed-node-0] 2026-03-07 00:53:32.284113 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:53:32.284121 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:53:32.284128 | orchestrator | 2026-03-07 00:53:32.284136 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:53:32.284145 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-07 00:53:32.284154 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-07 00:53:32.284167 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-07 00:53:32.284176 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:53:32.284184 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:53:32.284192 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 00:53:32.284200 | orchestrator | 2026-03-07 00:53:32.284208 | orchestrator | 2026-03-07 00:53:32.284216 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:53:32.284224 | orchestrator | Saturday 07 March 2026 00:53:28 +0000 (0:00:01.187) 0:02:44.086 ******** 2026-03-07 00:53:32.284232 | orchestrator | =============================================================================== 2026-03-07 00:53:32.284240 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 31.82s 2026-03-07 00:53:32.284254 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 23.57s 2026-03-07 00:53:32.284262 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 15.13s 2026-03-07 00:53:32.284270 | orchestrator | ovn-db 
: Restart ovn-sb-db container ----------------------------------- 13.67s 2026-03-07 00:53:32.284278 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.07s 2026-03-07 00:53:32.284285 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.25s 2026-03-07 00:53:32.284293 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.77s 2026-03-07 00:53:32.284301 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.20s 2026-03-07 00:53:32.284309 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.92s 2026-03-07 00:53:32.284317 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.78s 2026-03-07 00:53:32.284325 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.66s 2026-03-07 00:53:32.284333 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.27s 2026-03-07 00:53:32.284341 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.19s 2026-03-07 00:53:32.284349 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.04s 2026-03-07 00:53:32.284357 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.90s 2026-03-07 00:53:32.284368 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.88s 2026-03-07 00:53:32.284377 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.71s 2026-03-07 00:53:32.284384 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.69s 2026-03-07 00:53:32.284392 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.56s 2026-03-07 00:53:32.284400 | orchestrator | ovn-controller : 
Ensuring systemd override directory exists ------------- 1.54s 2026-03-07 00:53:32.284408 | orchestrator | 2026-03-07 00:53:32 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:53:32.284417 | orchestrator | 2026-03-07 00:53:32 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:53:32.284425 | orchestrator | 2026-03-07 00:53:32 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:53:35.333267 | orchestrator | 2026-03-07 00:53:35 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:53:35.335025 | orchestrator | 2026-03-07 00:53:35 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:53:35.335086 | orchestrator | 2026-03-07 00:53:35 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:53:38.392014 | orchestrator | 2026-03-07 00:53:38 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:53:38.393863 | orchestrator | 2026-03-07 00:53:38 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:53:38.393923 | orchestrator | 2026-03-07 00:53:38 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:53:41.449554 | orchestrator | 2026-03-07 00:53:41 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:53:41.450912 | orchestrator | 2026-03-07 00:53:41 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:53:41.450982 | orchestrator | 2026-03-07 00:53:41 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:53:44.500719 | orchestrator | 2026-03-07 00:53:44 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:53:44.501202 | orchestrator | 2026-03-07 00:53:44 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:53:44.502295 | orchestrator | 2026-03-07 00:53:44 | INFO  | Wait 1 second(s) until the next check 2026-03-07 
00:53:47.548064 | orchestrator | 2026-03-07 00:53:47 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:53:47.550129 | orchestrator | 2026-03-07 00:53:47 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:53:47.550215 | orchestrator | 2026-03-07 00:53:47 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:53:50.596747 | orchestrator | 2026-03-07 00:53:50 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:53:50.598422 | orchestrator | 2026-03-07 00:53:50 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:53:50.598468 | orchestrator | 2026-03-07 00:53:50 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:53:53.635101 | orchestrator | 2026-03-07 00:53:53 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:53:53.636316 | orchestrator | 2026-03-07 00:53:53 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:53:53.637479 | orchestrator | 2026-03-07 00:53:53 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:53:56.682739 | orchestrator | 2026-03-07 00:53:56 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:53:56.684113 | orchestrator | 2026-03-07 00:53:56 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:53:56.684162 | orchestrator | 2026-03-07 00:53:56 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:53:59.735615 | orchestrator | 2026-03-07 00:53:59 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:53:59.737009 | orchestrator | 2026-03-07 00:53:59 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:53:59.737041 | orchestrator | 2026-03-07 00:53:59 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:54:02.784336 | orchestrator | 2026-03-07 00:54:02 | INFO  | Task 
4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:54:02.785964 | orchestrator | 2026-03-07 00:54:02 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:54:02.786012 | orchestrator | 2026-03-07 00:54:02 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:54:05.828557 | orchestrator | 2026-03-07 00:54:05 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:54:05.831075 | orchestrator | 2026-03-07 00:54:05 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:54:05.831148 | orchestrator | 2026-03-07 00:54:05 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:54:08.880549 | orchestrator | 2026-03-07 00:54:08 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:54:08.882222 | orchestrator | 2026-03-07 00:54:08 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:54:08.882393 | orchestrator | 2026-03-07 00:54:08 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:54:11.929823 | orchestrator | 2026-03-07 00:54:11 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:54:11.929991 | orchestrator | 2026-03-07 00:54:11 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:54:11.930143 | orchestrator | 2026-03-07 00:54:11 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:54:14.971465 | orchestrator | 2026-03-07 00:54:14 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:54:14.973786 | orchestrator | 2026-03-07 00:54:14 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:54:14.973872 | orchestrator | 2026-03-07 00:54:14 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:54:18.024394 | orchestrator | 2026-03-07 00:54:18 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 
00:54:18.033346 | orchestrator | 2026-03-07 00:54:18 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:54:18.033419 | orchestrator | 2026-03-07 00:54:18 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:54:21.068456 | orchestrator | 2026-03-07 00:54:21 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:54:21.068693 | orchestrator | 2026-03-07 00:54:21 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:54:21.068708 | orchestrator | 2026-03-07 00:54:21 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:54:24.115444 | orchestrator | 2026-03-07 00:54:24 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:54:24.115788 | orchestrator | 2026-03-07 00:54:24 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:54:24.115819 | orchestrator | 2026-03-07 00:54:24 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:54:27.146076 | orchestrator | 2026-03-07 00:54:27 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:54:27.148972 | orchestrator | 2026-03-07 00:54:27 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:54:27.149024 | orchestrator | 2026-03-07 00:54:27 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:54:30.195402 | orchestrator | 2026-03-07 00:54:30 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:54:30.198547 | orchestrator | 2026-03-07 00:54:30 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:54:30.198667 | orchestrator | 2026-03-07 00:54:30 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:54:33.241173 | orchestrator | 2026-03-07 00:54:33 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:54:33.243590 | orchestrator | 2026-03-07 00:54:33 | INFO  | Task 
32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:54:33.243831 | orchestrator | 2026-03-07 00:54:33 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:54:36.305836 | orchestrator | 2026-03-07 00:54:36 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:54:36.306327 | orchestrator | 2026-03-07 00:54:36 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:54:36.307346 | orchestrator | 2026-03-07 00:54:36 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:54:39.360769 | orchestrator | 2026-03-07 00:54:39 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:54:39.362819 | orchestrator | 2026-03-07 00:54:39 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:54:39.362876 | orchestrator | 2026-03-07 00:54:39 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:54:42.389105 | orchestrator | 2026-03-07 00:54:42 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:54:42.389398 | orchestrator | 2026-03-07 00:54:42 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:54:42.389426 | orchestrator | 2026-03-07 00:54:42 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:54:45.439333 | orchestrator | 2026-03-07 00:54:45 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:54:45.440066 | orchestrator | 2026-03-07 00:54:45 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:54:45.440156 | orchestrator | 2026-03-07 00:54:45 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:54:48.482321 | orchestrator | 2026-03-07 00:54:48 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:54:48.483456 | orchestrator | 2026-03-07 00:54:48 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 
00:54:48.483502 | orchestrator | 2026-03-07 00:54:48 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:54:51.511066 | orchestrator | 2026-03-07 00:54:51 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:54:51.514282 | orchestrator | 2026-03-07 00:54:51 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:54:51.514339 | orchestrator | 2026-03-07 00:54:51 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:54:54.554323 | orchestrator | 2026-03-07 00:54:54 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:54:54.558795 | orchestrator | 2026-03-07 00:54:54 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:54:54.558888 | orchestrator | 2026-03-07 00:54:54 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:54:57.610836 | orchestrator | 2026-03-07 00:54:57 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:54:57.612070 | orchestrator | 2026-03-07 00:54:57 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:54:57.612142 | orchestrator | 2026-03-07 00:54:57 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:55:00.665834 | orchestrator | 2026-03-07 00:55:00 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:55:00.667909 | orchestrator | 2026-03-07 00:55:00 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:00.667982 | orchestrator | 2026-03-07 00:55:00 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:55:03.716556 | orchestrator | 2026-03-07 00:55:03 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:55:03.717995 | orchestrator | 2026-03-07 00:55:03 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:03.718087 | orchestrator | 2026-03-07 00:55:03 | INFO  | Wait 1 second(s) 
until the next check 2026-03-07 00:55:06.778748 | orchestrator | 2026-03-07 00:55:06 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:55:06.780085 | orchestrator | 2026-03-07 00:55:06 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:06.780262 | orchestrator | 2026-03-07 00:55:06 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:55:09.838549 | orchestrator | 2026-03-07 00:55:09 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:55:09.840734 | orchestrator | 2026-03-07 00:55:09 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:09.840777 | orchestrator | 2026-03-07 00:55:09 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:55:12.895539 | orchestrator | 2026-03-07 00:55:12 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:55:12.895827 | orchestrator | 2026-03-07 00:55:12 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:12.896040 | orchestrator | 2026-03-07 00:55:12 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:55:15.952624 | orchestrator | 2026-03-07 00:55:15 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:55:15.954505 | orchestrator | 2026-03-07 00:55:15 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:15.954572 | orchestrator | 2026-03-07 00:55:15 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:55:19.007204 | orchestrator | 2026-03-07 00:55:19 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:55:19.009832 | orchestrator | 2026-03-07 00:55:19 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:19.009963 | orchestrator | 2026-03-07 00:55:19 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:55:22.056271 | orchestrator | 2026-03-07 
00:55:22 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:55:22.058003 | orchestrator | 2026-03-07 00:55:22 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:22.058113 | orchestrator | 2026-03-07 00:55:22 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:55:25.116085 | orchestrator | 2026-03-07 00:55:25 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:55:25.118750 | orchestrator | 2026-03-07 00:55:25 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:25.118837 | orchestrator | 2026-03-07 00:55:25 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:55:28.170001 | orchestrator | 2026-03-07 00:55:28 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:55:28.173239 | orchestrator | 2026-03-07 00:55:28 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:28.173317 | orchestrator | 2026-03-07 00:55:28 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:55:31.217217 | orchestrator | 2026-03-07 00:55:31 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:55:31.218288 | orchestrator | 2026-03-07 00:55:31 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:31.218352 | orchestrator | 2026-03-07 00:55:31 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:55:34.272455 | orchestrator | 2026-03-07 00:55:34 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:55:34.274592 | orchestrator | 2026-03-07 00:55:34 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:34.275207 | orchestrator | 2026-03-07 00:55:34 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:55:37.321554 | orchestrator | 2026-03-07 00:55:37 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state 
STARTED 2026-03-07 00:55:37.322308 | orchestrator | 2026-03-07 00:55:37 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:37.322336 | orchestrator | 2026-03-07 00:55:37 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:55:40.374628 | orchestrator | 2026-03-07 00:55:40 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:55:40.376986 | orchestrator | 2026-03-07 00:55:40 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:40.377059 | orchestrator | 2026-03-07 00:55:40 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:55:43.419781 | orchestrator | 2026-03-07 00:55:43 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:55:43.420357 | orchestrator | 2026-03-07 00:55:43 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:43.420750 | orchestrator | 2026-03-07 00:55:43 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:55:46.475644 | orchestrator | 2026-03-07 00:55:46 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:55:46.477317 | orchestrator | 2026-03-07 00:55:46 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:46.477728 | orchestrator | 2026-03-07 00:55:46 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:55:49.536947 | orchestrator | 2026-03-07 00:55:49 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:55:49.538589 | orchestrator | 2026-03-07 00:55:49 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:49.538622 | orchestrator | 2026-03-07 00:55:49 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:55:52.659355 | orchestrator | 2026-03-07 00:55:52 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:55:52.660137 | orchestrator | 2026-03-07 00:55:52 | INFO  
| Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:52.660199 | orchestrator | 2026-03-07 00:55:52 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:55:55.698559 | orchestrator | 2026-03-07 00:55:55 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:55:55.700321 | orchestrator | 2026-03-07 00:55:55 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:55.700436 | orchestrator | 2026-03-07 00:55:55 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:55:58.760667 | orchestrator | 2026-03-07 00:55:58 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:55:58.762232 | orchestrator | 2026-03-07 00:55:58 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:55:58.763278 | orchestrator | 2026-03-07 00:55:58 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:56:01.802095 | orchestrator | 2026-03-07 00:56:01 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:56:01.802600 | orchestrator | 2026-03-07 00:56:01 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:56:01.804073 | orchestrator | 2026-03-07 00:56:01 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:56:04.840806 | orchestrator | 2026-03-07 00:56:04 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:56:04.844038 | orchestrator | 2026-03-07 00:56:04 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:56:04.844108 | orchestrator | 2026-03-07 00:56:04 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:56:07.888747 | orchestrator | 2026-03-07 00:56:07 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED 2026-03-07 00:56:07.891282 | orchestrator | 2026-03-07 00:56:07 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 
00:56:07.891316 | orchestrator | 2026-03-07 00:56:07 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:56:10.926597 | orchestrator | 2026-03-07 00:56:10 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:56:10.927751 | orchestrator | 2026-03-07 00:56:10 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:56:10.927827 | orchestrator | 2026-03-07 00:56:10 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:56:13.970829 | orchestrator | 2026-03-07 00:56:13 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:56:13.972240 | orchestrator | 2026-03-07 00:56:13 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:56:13.972355 | orchestrator | 2026-03-07 00:56:13 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:56:17.021785 | orchestrator | 2026-03-07 00:56:17 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:56:17.021903 | orchestrator | 2026-03-07 00:56:17 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:56:17.021922 | orchestrator | 2026-03-07 00:56:17 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:56:20.058774 | orchestrator | 2026-03-07 00:56:20 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:56:20.059882 | orchestrator | 2026-03-07 00:56:20 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:56:20.059914 | orchestrator | 2026-03-07 00:56:20 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:56:23.094935 | orchestrator | 2026-03-07 00:56:23 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:56:23.096607 | orchestrator | 2026-03-07 00:56:23 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:56:23.096656 | orchestrator | 2026-03-07 00:56:23 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:56:26.130974 | orchestrator | 2026-03-07 00:56:26 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:56:26.132392 | orchestrator | 2026-03-07 00:56:26 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:56:26.134511 | orchestrator | 2026-03-07 00:56:26 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:56:29.167706 | orchestrator | 2026-03-07 00:56:29 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:56:29.169175 | orchestrator | 2026-03-07 00:56:29 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:56:29.169242 | orchestrator | 2026-03-07 00:56:29 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:56:32.222265 | orchestrator | 2026-03-07 00:56:32 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:56:32.225361 | orchestrator | 2026-03-07 00:56:32 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:56:32.225456 | orchestrator | 2026-03-07 00:56:32 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:56:35.276231 | orchestrator | 2026-03-07 00:56:35 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:56:35.279185 | orchestrator | 2026-03-07 00:56:35 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:56:35.279303 | orchestrator | 2026-03-07 00:56:35 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:56:38.315671 | orchestrator | 2026-03-07 00:56:38 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:56:38.318193 | orchestrator | 2026-03-07 00:56:38 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:56:38.318256 | orchestrator | 2026-03-07 00:56:38 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:56:41.366418 | orchestrator | 2026-03-07 00:56:41 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:56:41.368945 | orchestrator | 2026-03-07 00:56:41 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:56:41.369107 | orchestrator | 2026-03-07 00:56:41 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:56:44.416579 | orchestrator | 2026-03-07 00:56:44 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:56:44.418418 | orchestrator | 2026-03-07 00:56:44 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:56:44.418466 | orchestrator | 2026-03-07 00:56:44 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:56:47.456387 | orchestrator | 2026-03-07 00:56:47 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:56:47.457710 | orchestrator | 2026-03-07 00:56:47 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:56:47.457867 | orchestrator | 2026-03-07 00:56:47 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:56:50.502192 | orchestrator | 2026-03-07 00:56:50 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:56:50.504982 | orchestrator | 2026-03-07 00:56:50 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:56:50.505058 | orchestrator | 2026-03-07 00:56:50 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:56:53.547035 | orchestrator | 2026-03-07 00:56:53 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:56:53.548894 | orchestrator | 2026-03-07 00:56:53 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:56:53.548962 | orchestrator | 2026-03-07 00:56:53 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:56:56.589030 | orchestrator | 2026-03-07 00:56:56 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:56:56.589314 | orchestrator | 2026-03-07 00:56:56 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:56:56.589348 | orchestrator | 2026-03-07 00:56:56 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:56:59.618411 | orchestrator | 2026-03-07 00:56:59 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state STARTED
2026-03-07 00:56:59.619935 | orchestrator | 2026-03-07 00:56:59 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED
2026-03-07 00:56:59.620008 | orchestrator | 2026-03-07 00:56:59 | INFO  | Wait 1 second(s) until the next check
2026-03-07 00:57:02.654956 | orchestrator | 2026-03-07 00:57:02 | INFO  | Task a8b4a5a2-b965-43a3-bd0e-1f7d24c29831 is in state STARTED
2026-03-07 00:57:02.661639 | orchestrator |
2026-03-07 00:57:02.661756 | orchestrator | 2026-03-07 00:57:02 | INFO  | Task 4adcf673-02f4-4f18-a022-f7a5224f3f1f is in state SUCCESS
2026-03-07 00:57:02.663636 | orchestrator |
2026-03-07 00:57:02.663726 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 00:57:02.663743 | orchestrator |
2026-03-07 00:57:02.663755 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-07 00:57:02.663797 | orchestrator | Saturday 07 March 2026 00:49:22 +0000 (0:00:00.477) 0:00:00.477 ********
2026-03-07 00:57:02.663810 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:57:02.663823 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:57:02.663834 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:57:02.663846 | orchestrator |
2026-03-07 00:57:02.663857 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-07 00:57:02.663868 | orchestrator | Saturday 07 March 2026 00:49:23 +0000 (0:00:00.558) 0:00:01.035 ********
2026-03-07 00:57:02.663880 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
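The polling output above repeats a simple pattern: check each task ID, print its state, and sleep one second until every task leaves STARTED. A minimal sketch of that loop, assuming a hypothetical `get_task_state` callable (this is illustrative, not the actual OSISM client API):

```python
import time

# Terminal states after which a task no longer needs polling. The state
# names mirror the log above; the set membership is an assumption.
TERMINAL = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll each task until all reach a terminal state; return final states.

    get_task_state is a hypothetical callable mapping a task ID to its
    current state string (e.g. "STARTED", "SUCCESS").
    """
    states = {}
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL:
                states[task_id] = state
        pending -= set(states)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

With two tasks this produces exactly the interleaving seen in the log: both IDs reported each round, followed by the wait message, until one resolves to SUCCESS.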
2026-03-07 00:57:02.663891 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-07 00:57:02.663902 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-07 00:57:02.663914 | orchestrator |
2026-03-07 00:57:02.663925 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-07 00:57:02.663971 | orchestrator |
2026-03-07 00:57:02.663985 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-07 00:57:02.663999 | orchestrator | Saturday 07 March 2026 00:49:23 +0000 (0:00:00.811) 0:00:01.847 ********
2026-03-07 00:57:02.664013 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:57:02.664090 | orchestrator |
2026-03-07 00:57:02.664105 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-07 00:57:02.664147 | orchestrator | Saturday 07 March 2026 00:49:24 +0000 (0:00:00.811) 0:00:02.659 ********
2026-03-07 00:57:02.664162 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:57:02.664175 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:57:02.664226 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:57:02.664330 | orchestrator |
2026-03-07 00:57:02.664343 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-07 00:57:02.664358 | orchestrator | Saturday 07 March 2026 00:49:25 +0000 (0:00:00.770) 0:00:03.429 ********
2026-03-07 00:57:02.664371 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:57:02.664384 | orchestrator |
2026-03-07 00:57:02.664395 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-07 00:57:02.664405 | orchestrator | Saturday 07 March 2026 00:49:26 +0000 (0:00:00.932) 0:00:04.362 ********
2026-03-07 00:57:02.664416 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:57:02.664427 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:57:02.664438 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:57:02.664448 | orchestrator |
2026-03-07 00:57:02.664459 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-07 00:57:02.664470 | orchestrator | Saturday 07 March 2026 00:49:27 +0000 (0:00:00.762) 0:00:05.124 ********
2026-03-07 00:57:02.664481 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-07 00:57:02.664492 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-07 00:57:02.664504 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-07 00:57:02.664515 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-07 00:57:02.664525 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-07 00:57:02.664536 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-07 00:57:02.664546 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-07 00:57:02.664559 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-07 00:57:02.664570 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-07 00:57:02.664580 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-07 00:57:02.664591 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-07 00:57:02.664629 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-07 00:57:02.664681 | orchestrator |
2026-03-07 00:57:02.664693 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-07 00:57:02.664703 | orchestrator | Saturday 07 March 2026 00:49:31 +0000 (0:00:04.013) 0:00:09.137 ********
2026-03-07 00:57:02.664866 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-07 00:57:02.664905 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-07 00:57:02.664917 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-07 00:57:02.664975 | orchestrator |
2026-03-07 00:57:02.664987 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-07 00:57:02.665012 | orchestrator | Saturday 07 March 2026 00:49:32 +0000 (0:00:01.031) 0:00:10.169 ********
2026-03-07 00:57:02.665023 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-07 00:57:02.665034 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-07 00:57:02.665045 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-07 00:57:02.665056 | orchestrator |
2026-03-07 00:57:02.665067 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-07 00:57:02.665078 | orchestrator | Saturday 07 March 2026 00:49:34 +0000 (0:00:02.075) 0:00:12.245 ********
2026-03-07 00:57:02.665089 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-07 00:57:02.665100 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:57:02.665198 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-07 00:57:02.665231 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:57:02.665250 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-07 00:57:02.665267 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:57:02.665284 | orchestrator |
2026-03-07 00:57:02.665303 | orchestrator | TASK [loadbalancer
: Ensuring config directories exist] ************************
2026-03-07 00:57:02.665318 | orchestrator | Saturday 07 March 2026 00:49:37 +0000 (0:00:03.302) 0:00:15.548 ********
2026-03-07 00:57:02.665349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-07 00:57:02.665381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-07 00:57:02.665402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-07 00:57:02.665419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:57:02.665439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:57:02.665486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:57:02.665507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:57:02.665596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:57:02.665616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07
00:57:02.665635 | orchestrator |
2026-03-07 00:57:02.665654 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-03-07 00:57:02.665674 | orchestrator | Saturday 07 March 2026 00:49:40 +0000 (0:00:02.832) 0:00:18.380 ********
2026-03-07 00:57:02.666132 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:57:02.666159 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:57:02.666178 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:57:02.666197 | orchestrator |
2026-03-07 00:57:02.666217 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-03-07 00:57:02.666234 | orchestrator | Saturday 07 March 2026 00:49:42 +0000 (0:00:02.191) 0:00:20.572 ********
2026-03-07 00:57:02.666275 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-03-07 00:57:02.666288 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-03-07 00:57:02.666299 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-03-07 00:57:02.666310 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-03-07 00:57:02.666321 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-03-07 00:57:02.666332 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-03-07 00:57:02.666359 | orchestrator |
2026-03-07 00:57:02.666371 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-03-07 00:57:02.666450 | orchestrator | Saturday 07 March 2026 00:49:44 +0000 (0:00:02.199) 0:00:22.772 ********
2026-03-07 00:57:02.666462 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:57:02.666474 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:57:02.666484 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:57:02.666496 | orchestrator |
2026-03-07 00:57:02.666506 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-03-07 00:57:02.666518 | orchestrator | Saturday 07 March 2026 00:49:46 +0000 (0:00:01.596) 0:00:24.368 ********
2026-03-07 00:57:02.666529 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:57:02.666540 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:57:02.666551 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:57:02.666562 | orchestrator |
2026-03-07 00:57:02.666573 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-03-07 00:57:02.666584 | orchestrator | Saturday 07 March 2026 00:49:51 +0000 (0:00:04.724) 0:00:29.092 ********
2026-03-07 00:57:02.666597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-07 00:57:02.666683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:57:02.666708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:57:02.666722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__32992cdb2530662cbcaefce100e14d1ba1a27a3e', '__omit_place_holder__32992cdb2530662cbcaefce100e14d1ba1a27a3e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-07 00:57:02.666734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-07 00:57:02.666754 | orchestrator | skipping: [testbed-node-0]
2026-03-07
00:57:02.666795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:57:02.666818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:57:02.666850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__32992cdb2530662cbcaefce100e14d1ba1a27a3e', '__omit_place_holder__32992cdb2530662cbcaefce100e14d1ba1a27a3e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-07 00:57:02.666869 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:57:02.666888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-07 00:57:02.666907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:57:02.666919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:57:02.666939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__32992cdb2530662cbcaefce100e14d1ba1a27a3e', '__omit_place_holder__32992cdb2530662cbcaefce100e14d1ba1a27a3e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-07 00:57:02.666951 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:57:02.666962 | orchestrator |
2026-03-07 00:57:02.666973 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-03-07 00:57:02.666985 | orchestrator | Saturday 07 March 2026 00:49:52 +0000 (0:00:01.004) 0:00:30.096 ********
2026-03-07 00:57:02.666996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-07 00:57:02.667015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130',
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-07 00:57:02.667027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-07 00:57:02.667044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:57:02.667068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:57:02.667086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__32992cdb2530662cbcaefce100e14d1ba1a27a3e', '__omit_place_holder__32992cdb2530662cbcaefce100e14d1ba1a27a3e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-07 00:57:02.667255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:57:02.667275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-07 00:57:02.667308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:57:02.667482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-07 00:57:02.667512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__32992cdb2530662cbcaefce100e14d1ba1a27a3e', '__omit_place_holder__32992cdb2530662cbcaefce100e14d1ba1a27a3e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-07 00:57:02.667546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__32992cdb2530662cbcaefce100e14d1ba1a27a3e', '__omit_place_holder__32992cdb2530662cbcaefce100e14d1ba1a27a3e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-07 00:57:02.667567 | orchestrator | 2026-03-07 00:57:02.667586 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-07 00:57:02.667605 | orchestrator | Saturday 07 March 2026 00:49:56 +0000 (0:00:04.384) 0:00:34.481 ******** 2026-03-07 00:57:02.667619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-07 00:57:02.667631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-07 00:57:02.667654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:57:02.667717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-07 00:57:02.667740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:57:02.667753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:57:02.667765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:57:02.667799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:57:02.667811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:57:02.667880 | orchestrator | 2026-03-07 00:57:02.667893 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-07 00:57:02.668023 | orchestrator | Saturday 07 March 2026 00:50:01 +0000 (0:00:05.302) 0:00:39.784 ******** 2026-03-07 00:57:02.668035 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-07 00:57:02.668055 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-07 00:57:02.668067 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-07 00:57:02.668078 | orchestrator | 2026-03-07 00:57:02.668089 | orchestrator | TASK [loadbalancer 
: Copying over proxysql config] ***************************** 2026-03-07 00:57:02.668100 | orchestrator | Saturday 07 March 2026 00:50:04 +0000 (0:00:02.201) 0:00:41.986 ******** 2026-03-07 00:57:02.668139 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-07 00:57:02.668151 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-07 00:57:02.668171 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-07 00:57:02.668182 | orchestrator | 2026-03-07 00:57:02.668194 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-07 00:57:02.668211 | orchestrator | Saturday 07 March 2026 00:50:11 +0000 (0:00:06.993) 0:00:48.979 ******** 2026-03-07 00:57:02.668223 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.668235 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.668246 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.668269 | orchestrator | 2026-03-07 00:57:02.668281 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-07 00:57:02.668292 | orchestrator | Saturday 07 March 2026 00:50:12 +0000 (0:00:01.437) 0:00:50.416 ******** 2026-03-07 00:57:02.668304 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-07 00:57:02.668316 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-07 00:57:02.668327 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-07 00:57:02.668338 | orchestrator | 2026-03-07 00:57:02.668349 | orchestrator | TASK [loadbalancer : 
Copying over keepalived.conf] ***************************** 2026-03-07 00:57:02.668360 | orchestrator | Saturday 07 March 2026 00:50:17 +0000 (0:00:05.404) 0:00:55.820 ******** 2026-03-07 00:57:02.668371 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-07 00:57:02.668453 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-07 00:57:02.668465 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-07 00:57:02.668477 | orchestrator | 2026-03-07 00:57:02.668488 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-07 00:57:02.668577 | orchestrator | Saturday 07 March 2026 00:50:23 +0000 (0:00:05.425) 0:01:01.248 ******** 2026-03-07 00:57:02.668588 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-07 00:57:02.668612 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-07 00:57:02.668624 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-07 00:57:02.668635 | orchestrator | 2026-03-07 00:57:02.668646 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-07 00:57:02.668657 | orchestrator | Saturday 07 March 2026 00:50:26 +0000 (0:00:03.000) 0:01:04.249 ******** 2026-03-07 00:57:02.668694 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-07 00:57:02.668707 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-07 00:57:02.668718 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-07 00:57:02.668729 | orchestrator | 2026-03-07 00:57:02.668740 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-07 00:57:02.668750 | orchestrator | Saturday 07 
March 2026 00:50:30 +0000 (0:00:03.994) 0:01:08.243 ******** 2026-03-07 00:57:02.668761 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:57:02.668794 | orchestrator | 2026-03-07 00:57:02.668805 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-07 00:57:02.668816 | orchestrator | Saturday 07 March 2026 00:50:31 +0000 (0:00:01.226) 0:01:09.470 ******** 2026-03-07 00:57:02.668828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-07 00:57:02.668859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-07 00:57:02.668879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 
'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-07 00:57:02.668892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:57:02.668903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:57:02.668915 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:57:02.668927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:57:02.668945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:57:02.668964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:57:02.668976 | orchestrator | 2026-03-07 00:57:02.668988 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-07 00:57:02.668999 | orchestrator | Saturday 07 March 2026 00:50:35 +0000 (0:00:03.876) 0:01:13.347 ******** 2026-03-07 00:57:02.669016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.669113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.669128 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.669139 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.669151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.669171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.669285 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.669298 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.669316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.669328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.669340 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.669351 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.669363 | orchestrator | 2026-03-07 00:57:02.669374 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-07 00:57:02.669385 | orchestrator | Saturday 07 March 2026 00:50:36 +0000 (0:00:00.875) 0:01:14.222 ******** 2026-03-07 00:57:02.669397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.669416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.669434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.669446 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.669458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.669503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.669517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.669528 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.669594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.669616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.669628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.669639 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.669650 | orchestrator | 2026-03-07 00:57:02.669662 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-07 00:57:02.669673 | orchestrator | Saturday 07 March 2026 00:50:37 +0000 (0:00:01.127) 0:01:15.350 ******** 2026-03-07 00:57:02.669693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.669711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.669748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.669761 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.669889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.669912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.670199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.670230 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.670264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.670299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.670309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.670317 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.670325 | orchestrator | 2026-03-07 00:57:02.670334 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-07 00:57:02.670342 | orchestrator | Saturday 07 March 2026 00:50:38 +0000 (0:00:01.111) 0:01:16.461 ******** 2026-03-07 00:57:02.670350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.670367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.670376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.670384 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.670393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.670410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.670424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.670433 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.670441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.670455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.670463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.670472 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.670480 | orchestrator | 2026-03-07 00:57:02.670490 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-07 00:57:02.670505 | orchestrator | Saturday 07 March 2026 00:50:39 +0000 (0:00:00.800) 0:01:17.262 ******** 2026-03-07 00:57:02.670519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.670540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.670560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.670574 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.670588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.670612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.670626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.670640 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.670652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.670846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.670858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.670867 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.670875 | orchestrator | 2026-03-07 00:57:02.670883 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-07 00:57:02.670891 | orchestrator | Saturday 07 March 2026 00:50:40 +0000 
(0:00:01.325) 0:01:18.587 ******** 2026-03-07 00:57:02.670905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.670921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.670929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.670937 | 
orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.670945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.670954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.670968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.670977 | 
orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.670986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.670999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.671008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.671016 | 
orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.671024 | orchestrator | 2026-03-07 00:57:02.671032 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-07 00:57:02.671040 | orchestrator | Saturday 07 March 2026 00:50:42 +0000 (0:00:01.729) 0:01:20.316 ******** 2026-03-07 00:57:02.671048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.671056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.671124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.671147 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.671159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.671174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.671182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.671190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.671199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.671207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.671215 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.671223 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.671231 | orchestrator | 2026-03-07 00:57:02.671239 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-07 00:57:02.671254 | orchestrator | Saturday 07 March 2026 00:50:43 +0000 (0:00:01.150) 0:01:21.467 ******** 2026-03-07 00:57:02.671262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.671282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2026-03-07 00:57:02.671291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.671299 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.671308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.671316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2026-03-07 00:57:02.671325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-07 00:57:02.671439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.671462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-07 00:57:02.671471 | orchestrator | skipping: 
[testbed-node-1] 2026-03-07 00:57:02.671479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-07 00:57:02.671488 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.671496 | orchestrator | 2026-03-07 00:57:02.671504 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-07 00:57:02.671512 | orchestrator | Saturday 07 March 2026 00:50:44 +0000 (0:00:01.218) 0:01:22.686 ******** 2026-03-07 00:57:02.671520 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-07 00:57:02.671529 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-07 00:57:02.671572 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-07 00:57:02.671582 | orchestrator | 2026-03-07 00:57:02.671590 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-07 00:57:02.671598 | orchestrator | Saturday 07 March 2026 00:50:47 +0000 (0:00:02.797) 0:01:25.483 ******** 2026-03-07 00:57:02.671606 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-07 00:57:02.671613 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-07 00:57:02.671625 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-07 00:57:02.671638 | orchestrator | 2026-03-07 00:57:02.671651 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-07 00:57:02.671664 | orchestrator | Saturday 07 March 2026 00:50:49 +0000 (0:00:01.992) 0:01:27.476 ******** 2026-03-07 00:57:02.671677 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-07 00:57:02.671690 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-07 00:57:02.671703 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-07 00:57:02.671715 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-07 00:57:02.671728 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.671740 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-07 00:57:02.671762 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.671802 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-07 00:57:02.671815 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.671828 | orchestrator | 2026-03-07 00:57:02.671841 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-07 00:57:02.671855 | orchestrator | Saturday 07 March 2026 00:50:50 +0000 (0:00:01.114) 0:01:28.590 ******** 2026-03-07 00:57:02.671877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-07 00:57:02.671900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-07 00:57:02.671991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-07 00:57:02.672007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:57:02.672022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:57:02.672074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-07 00:57:02.672122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:57:02.672179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:57:02.672193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-07 00:57:02.672202 | orchestrator | 2026-03-07 00:57:02.672210 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-07 00:57:02.672218 | orchestrator | Saturday 07 March 2026 00:50:54 +0000 (0:00:03.515) 0:01:32.105 ******** 2026-03-07 00:57:02.672226 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:57:02.672234 | orchestrator | 2026-03-07 
00:57:02.672242 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-07 00:57:02.672250 | orchestrator | Saturday 07 March 2026 00:50:55 +0000 (0:00:00.786) 0:01:32.892 ******** 2026-03-07 00:57:02.672260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-07 00:57:02.672270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-07 00:57:02.672284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.672293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.672307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-07 00:57:02.672320 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-07 00:57:02.672329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-07 00:57:02.672337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.672351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-07 00:57:02.672359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.672373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-07 
00:57:02.672389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.672398 | orchestrator | 2026-03-07 00:57:02.672406 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-07 00:57:02.672414 | orchestrator | Saturday 07 March 2026 00:51:00 +0000 (0:00:05.295) 0:01:38.187 ******** 2026-03-07 00:57:02.672422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-07 00:57:02.672437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-07 00:57:02.672507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.672517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.672526 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.672541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-07 00:57:02.672554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-07 00:57:02.672563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.672571 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.672585 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.672594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-07 00:57:02.672603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-07 00:57:02.672616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.672628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.672637 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.672645 | orchestrator | 2026-03-07 00:57:02.672653 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-07 00:57:02.672688 | orchestrator | Saturday 07 March 2026 00:51:01 +0000 (0:00:01.487) 0:01:39.675 ******** 2026-03-07 00:57:02.672706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}})  2026-03-07 00:57:02.672717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-07 00:57:02.672731 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.672802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-07 00:57:02.672811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-07 00:57:02.672819 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.672827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-07 00:57:02.672835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-07 00:57:02.672843 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.672852 | orchestrator | 2026-03-07 00:57:02.672860 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-07 00:57:02.672868 | orchestrator | Saturday 07 March 2026 00:51:04 +0000 (0:00:02.946) 0:01:42.621 ******** 2026-03-07 00:57:02.672876 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.672884 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.672891 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.672919 | orchestrator | 2026-03-07 00:57:02.672928 | orchestrator | 
TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-03-07 00:57:02.672936 | orchestrator | Saturday 07 March 2026 00:51:06 +0000 (0:00:01.698) 0:01:44.320 ********
2026-03-07 00:57:02.672944 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:57:02.672952 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:57:02.672959 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:57:02.672967 | orchestrator |
2026-03-07 00:57:02.672975 | orchestrator | TASK [include_role : barbican] *************************************************
2026-03-07 00:57:02.672983 | orchestrator | Saturday 07 March 2026 00:51:08 +0000 (0:00:02.500) 0:01:46.821 ********
2026-03-07 00:57:02.672991 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:57:02.672999 | orchestrator |
2026-03-07 00:57:02.673007 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-03-07 00:57:02.673015 | orchestrator | Saturday 07 March 2026 00:51:09 +0000 (0:00:01.052) 0:01:47.873 ********
2026-03-07 00:57:02.673031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-07 00:57:02.673045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.673061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.673070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-07 00:57:02.673078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.673087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.673101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-07 00:57:02.673119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.673128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.673136 | orchestrator |
2026-03-07 00:57:02.673189 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-03-07 00:57:02.673264 | orchestrator | Saturday 07 March 2026 00:51:14 +0000 (0:00:04.617) 0:01:52.490 ********
2026-03-07 00:57:02.673273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-07 00:57:02.673281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.673297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.673305 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:57:02.673318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-07 00:57:02.673334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.673343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.673351 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:57:02.673360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-07 00:57:02.673372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.673381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.673394 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:57:02.673416 | orchestrator |
2026-03-07 00:57:02.673425 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-03-07 00:57:02.673437 | orchestrator | Saturday 07 March 2026 00:51:15 +0000 (0:00:01.327) 0:01:53.818 ********
2026-03-07 00:57:02.673445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-07 00:57:02.673453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-07 00:57:02.673462 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:57:02.673470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-07 00:57:02.673478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-07 00:57:02.673486 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:57:02.673494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-07 00:57:02.673503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-07 00:57:02.673511 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:57:02.673519 | orchestrator |
2026-03-07 00:57:02.673526 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-03-07 00:57:02.673534 | orchestrator | Saturday 07 March 2026 00:51:17 +0000 (0:00:01.595) 0:01:55.414 ********
2026-03-07 00:57:02.673542 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:57:02.673550 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:57:02.673558 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:57:02.673566 | orchestrator |
2026-03-07 00:57:02.673574 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-03-07 00:57:02.673582 | orchestrator | Saturday 07 March 2026 00:51:19 +0000 (0:00:01.699) 0:01:57.113 ********
2026-03-07 00:57:02.673589 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:57:02.673597 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:57:02.673605 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:57:02.673613 | orchestrator |
2026-03-07 00:57:02.673621 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-03-07 00:57:02.673629 | orchestrator | Saturday 07 March 2026 00:51:21 +0000 (0:00:02.698) 0:01:59.812 ********
2026-03-07 00:57:02.673664 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:57:02.673674 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:57:02.673682 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:57:02.673690 | orchestrator |
2026-03-07 00:57:02.673698 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-03-07 00:57:02.673735 | orchestrator | Saturday 07 March 2026 00:51:22 +0000 (0:00:00.481) 0:02:00.294 ********
2026-03-07 00:57:02.673749 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:57:02.673758 | orchestrator |
2026-03-07 00:57:02.673765 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-03-07 00:57:02.673825 | orchestrator | Saturday 07 March 2026 00:51:23 +0000 (0:00:01.347) 0:02:01.641 ********
2026-03-07 00:57:02.673840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-07 00:57:02.673858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-07 00:57:02.673867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-07 00:57:02.673875 | orchestrator |
2026-03-07 00:57:02.673883 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-03-07 00:57:02.673894 | orchestrator | Saturday 07 March 2026 00:51:27 +0000 (0:00:03.527) 0:02:05.168 ********
2026-03-07 00:57:02.673907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-07 00:57:02.673935 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:57:02.673955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-07 00:57:02.673969 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:57:02.673990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-07 00:57:02.674005 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:57:02.674054 | orchestrator |
2026-03-07 00:57:02.674071 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-03-07 00:57:02.674092 | orchestrator | Saturday 07 March 2026 00:51:30 +0000 (0:00:03.461) 0:02:08.629 ********
2026-03-07 00:57:02.674107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-07 00:57:02.674121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-07 00:57:02.674135 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:57:02.674146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-07 00:57:02.674157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-07 00:57:02.674171 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:57:02.674179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-07 00:57:02.674185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-07 00:57:02.674192 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:57:02.674199 | orchestrator |
2026-03-07 00:57:02.674206 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-03-07 00:57:02.674212 | orchestrator | Saturday 07 March 2026 00:51:34 +0000 (0:00:03.591) 0:02:12.221 ********
2026-03-07 00:57:02.674219 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:57:02.674226 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:57:02.674232 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:57:02.674239 | orchestrator |
2026-03-07 00:57:02.674245 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-03-07 00:57:02.674252 | orchestrator | Saturday 07 March 2026 00:51:35 +0000 (0:00:00.998) 0:02:13.220 ********
2026-03-07 00:57:02.674259 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:57:02.674266 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:57:02.674272 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:57:02.674279 | orchestrator |
2026-03-07 00:57:02.674286 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-03-07 00:57:02.674299 | orchestrator | Saturday 07 March 2026 00:51:36 +0000 (0:00:01.527) 0:02:14.747 ********
2026-03-07 00:57:02.674306 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:57:02.674312 | orchestrator |
2026-03-07 00:57:02.674319 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-03-07 00:57:02.674325 | orchestrator | Saturday 07 March 2026 00:51:37 +0000 (0:00:00.916) 0:02:15.663 ********
2026-03-07 00:57:02.674336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-07 00:57:02.674349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.674368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.674380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.674398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-07 00:57:02.674409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.674425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.674437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.674454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-07 00:57:02.674465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.674482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.674499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.674510 | orchestrator |
2026-03-07 00:57:02.674521 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-03-07 00:57:02.674532 | orchestrator | Saturday 07 March 2026 00:51:45 +0000 (0:00:07.811) 0:02:23.475 ********
2026-03-07 00:57:02.674543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-07 00:57:02.674562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.674574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host',
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.674602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.674610 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.674621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 00:57:02.674629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.674640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.674647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.674654 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.674661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 00:57:02.674674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.674685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.674697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.674704 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.674711 | orchestrator | 2026-03-07 00:57:02.674719 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-07 00:57:02.674725 | orchestrator | Saturday 07 March 2026 00:51:47 +0000 (0:00:01.831) 0:02:25.307 ******** 2026-03-07 00:57:02.674733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-07 00:57:02.674740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-07 00:57:02.674749 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.674756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-07 00:57:02.674762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-07 00:57:02.674790 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.674798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-07 00:57:02.674804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-07 00:57:02.674811 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.674818 | orchestrator | 2026-03-07 00:57:02.674825 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-07 00:57:02.674831 | orchestrator | Saturday 07 March 2026 00:51:49 +0000 (0:00:02.178) 0:02:27.485 ******** 2026-03-07 00:57:02.674838 | orchestrator | changed: [testbed-node-0] 2026-03-07 
00:57:02.674845 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.674852 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.674861 | orchestrator | 2026-03-07 00:57:02.674871 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-07 00:57:02.674881 | orchestrator | Saturday 07 March 2026 00:51:51 +0000 (0:00:01.827) 0:02:29.313 ******** 2026-03-07 00:57:02.674921 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.674934 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.674944 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.674954 | orchestrator | 2026-03-07 00:57:02.674971 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-07 00:57:02.674981 | orchestrator | Saturday 07 March 2026 00:51:54 +0000 (0:00:02.591) 0:02:31.904 ******** 2026-03-07 00:57:02.674991 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.675011 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.675022 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.675034 | orchestrator | 2026-03-07 00:57:02.675045 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-07 00:57:02.675055 | orchestrator | Saturday 07 March 2026 00:51:54 +0000 (0:00:00.671) 0:02:32.576 ******** 2026-03-07 00:57:02.675067 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.675078 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.675089 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.675099 | orchestrator | 2026-03-07 00:57:02.675110 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-07 00:57:02.675120 | orchestrator | Saturday 07 March 2026 00:51:55 +0000 (0:00:00.355) 0:02:32.931 ******** 2026-03-07 00:57:02.675129 | orchestrator | included: designate for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-07 00:57:02.675141 | orchestrator | 2026-03-07 00:57:02.675152 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-07 00:57:02.675164 | orchestrator | Saturday 07 March 2026 00:51:55 +0000 (0:00:00.875) 0:02:33.806 ******** 2026-03-07 00:57:02.675176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 00:57:02.675234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 00:57:02.675252 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 00:57:02.675313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 00:57:02.675320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-07 
00:57:02.675368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 00:57:02.675375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 00:57:02.675386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675429 | orchestrator | 2026-03-07 00:57:02.675436 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-07 00:57:02.675443 | orchestrator | Saturday 07 March 2026 00:52:01 +0000 (0:00:05.740) 0:02:39.547 ******** 2026-03-07 00:57:02.675450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 00:57:02.675465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 00:57:02.675472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 00:57:02.675498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 00:57:02.675516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675546 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.675553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675585 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.675596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 00:57:02.675608 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 00:57:02.675616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675630 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.675660 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.675667 | orchestrator | 
2026-03-07 00:57:02.675673 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-03-07 00:57:02.675680 | orchestrator | Saturday 07 March 2026 00:52:03 +0000 (0:00:01.548) 0:02:41.096 ********
2026-03-07 00:57:02.675688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-07 00:57:02.675695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-07 00:57:02.675702 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:57:02.675714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-07 00:57:02.675721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-07 00:57:02.675728 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:57:02.675735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-07 00:57:02.675742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-07 00:57:02.675749 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:57:02.675756 | orchestrator |
2026-03-07 00:57:02.675762 | orchestrator | TASK [proxysql-config : Copying over designate
ProxySQL users config] **********
2026-03-07 00:57:02.675814 | orchestrator | Saturday 07 March 2026 00:52:04 +0000 (0:00:01.306) 0:02:42.403 ********
2026-03-07 00:57:02.675822 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:57:02.675828 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:57:02.675835 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:57:02.675842 | orchestrator |
2026-03-07 00:57:02.675849 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-03-07 00:57:02.675861 | orchestrator | Saturday 07 March 2026 00:52:07 +0000 (0:00:02.731) 0:02:45.135 ********
2026-03-07 00:57:02.675868 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:57:02.675875 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:57:02.675881 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:57:02.675888 | orchestrator |
2026-03-07 00:57:02.675895 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-03-07 00:57:02.675902 | orchestrator | Saturday 07 March 2026 00:52:09 +0000 (0:00:02.108) 0:02:47.243 ********
2026-03-07 00:57:02.675909 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:57:02.675915 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:57:02.675922 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:57:02.675929 | orchestrator |
2026-03-07 00:57:02.675936 | orchestrator | TASK [include_role : glance] ***************************************************
2026-03-07 00:57:02.675942 | orchestrator | Saturday 07 March 2026 00:52:10 +0000 (0:00:00.684) 0:02:47.928 ********
2026-03-07 00:57:02.675949 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:57:02.675956 | orchestrator |
2026-03-07 00:57:02.675962 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-03-07 00:57:02.675969 | orchestrator | Saturday 07 March 2026 00:52:11
+0000 (0:00:00.967) 0:02:48.896 ******** 2026-03-07 00:57:02.675984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 00:57:02.675998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-07 00:57:02.676011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 00:57:02.676028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-07 00:57:02.676040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 00:57:02.676057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-07 00:57:02.676069 | orchestrator | 2026-03-07 00:57:02.676076 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-07 00:57:02.676083 | orchestrator | Saturday 07 March 2026 00:52:16 +0000 (0:00:05.844) 0:02:54.740 ******** 2026-03-07 00:57:02.676090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-07 00:57:02.676112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-07 00:57:02.676125 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.676151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-07 00:57:02.676173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-07 00:57:02.676185 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.676201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-07 00:57:02.676225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-07 00:57:02.676236 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.676248 | orchestrator | 2026-03-07 00:57:02.676259 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-07 00:57:02.676268 | orchestrator | Saturday 07 March 2026 00:52:21 +0000 (0:00:04.259) 0:02:59.000 ******** 2026-03-07 00:57:02.676284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-07 00:57:02.676303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-07 00:57:02.676314 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.676325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-07 00:57:02.676337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-07 00:57:02.676348 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.676358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-07 00:57:02.676369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-07 00:57:02.676379 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.676390 | orchestrator | 2026-03-07 00:57:02.676399 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-07 00:57:02.676409 | orchestrator | Saturday 07 March 2026 00:52:25 +0000 (0:00:03.999) 0:03:03.000 ******** 2026-03-07 00:57:02.676419 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.676426 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.676432 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.676438 | orchestrator | 2026-03-07 00:57:02.676444 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-07 00:57:02.676452 | orchestrator | Saturday 07 March 2026 00:52:26 +0000 (0:00:01.488) 0:03:04.489 ******** 2026-03-07 00:57:02.676463 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.676473 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.676483 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.676493 | orchestrator | 2026-03-07 00:57:02.676672 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-07 00:57:02.676687 | orchestrator | Saturday 07 March 2026 00:52:29 +0000 (0:00:02.714) 0:03:07.203 ******** 2026-03-07 00:57:02.676703 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.676709 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.676716 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.676722 
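The glance entries above each carry a `custom_member_list` of HAProxy `server` lines, and the glance-tls-proxy variants append TLS verification options. As a minimal sketch (not kolla-ansible's actual template), the member lines seen in this log can be composed like so — the node names, IPs, and check parameters are taken verbatim from the loop items above:

```python
# Sketch only: compose HAProxy backend member lines matching the
# custom_member_list entries shown in the log above.
def member_line(name: str, ip: str, port: int = 9292, tls_backend: bool = False) -> str:
    """Build one 'server' line; the TLS suffix mirrors the glance-tls-proxy items."""
    line = f"server {name} {ip}:{port} check inter 2000 rise 2 fall 5"
    if tls_backend:
        # glance-tls-proxy members verify the backend cert against the CA bundle
        line += " ssl verify required ca-file ca-certificates.crt"
    return line

nodes = {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}

for name, ip in nodes.items():
    print(member_line(name, ip))
```

The `check inter 2000 rise 2 fall 5` health-check tuning (2 s probe interval, 2 successes to mark up, 5 failures to mark down) is identical across all members in the log.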
| orchestrator | 2026-03-07 00:57:02.676728 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-07 00:57:02.676734 | orchestrator | Saturday 07 March 2026 00:52:30 +0000 (0:00:00.854) 0:03:08.058 ******** 2026-03-07 00:57:02.676740 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:57:02.676746 | orchestrator | 2026-03-07 00:57:02.676752 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-07 00:57:02.676759 | orchestrator | Saturday 07 March 2026 00:52:31 +0000 (0:00:01.019) 0:03:09.077 ******** 2026-03-07 00:57:02.676798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 00:57:02.676808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 00:57:02.676815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 00:57:02.676821 | orchestrator | 2026-03-07 00:57:02.676827 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-07 00:57:02.676834 | orchestrator | Saturday 07 March 2026 00:52:34 +0000 (0:00:03.636) 0:03:12.714 ******** 2026-03-07 00:57:02.676841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-07 00:57:02.676847 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.676865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-07 00:57:02.676872 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.676883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-07 00:57:02.676889 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.676896 | orchestrator | 2026-03-07 00:57:02.676902 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-07 00:57:02.676909 | orchestrator | Saturday 07 March 2026 00:52:35 +0000 (0:00:00.827) 0:03:13.541 ******** 2026-03-07 00:57:02.676920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  
2026-03-07 00:57:02.676931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-07 00:57:02.676942 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.676953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-07 00:57:02.676962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-07 00:57:02.676972 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.676981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-07 00:57:02.676989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-07 00:57:02.676998 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.677009 | orchestrator | 2026-03-07 00:57:02.677018 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-07 00:57:02.677027 | orchestrator | Saturday 07 March 2026 00:52:36 +0000 (0:00:00.761) 0:03:14.303 ******** 2026-03-07 00:57:02.677036 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.677045 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.677054 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.677062 | orchestrator | 2026-03-07 00:57:02.677071 | orchestrator | 
TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-07 00:57:02.677080 | orchestrator | Saturday 07 March 2026 00:52:38 +0000 (0:00:01.621) 0:03:15.924 ******** 2026-03-07 00:57:02.677097 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.677106 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.677115 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.677124 | orchestrator | 2026-03-07 00:57:02.677134 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-07 00:57:02.677143 | orchestrator | Saturday 07 March 2026 00:52:40 +0000 (0:00:02.489) 0:03:18.414 ******** 2026-03-07 00:57:02.677152 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.677163 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.677173 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.677184 | orchestrator | 2026-03-07 00:57:02.677193 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-07 00:57:02.677204 | orchestrator | Saturday 07 March 2026 00:52:41 +0000 (0:00:00.716) 0:03:19.131 ******** 2026-03-07 00:57:02.677213 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:57:02.677223 | orchestrator | 2026-03-07 00:57:02.677233 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-07 00:57:02.677242 | orchestrator | Saturday 07 March 2026 00:52:42 +0000 (0:00:00.957) 0:03:20.089 ******** 2026-03-07 00:57:02.677272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 00:57:02.677287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2026-03-07 00:57:02.677320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 00:57:02.677333 | orchestrator | 2026-03-07 00:57:02.677342 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-07 00:57:02.677351 | orchestrator | Saturday 07 March 2026 00:52:46 +0000 (0:00:04.452) 0:03:24.541 ******** 2026-03-07 00:57:02.677369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-07 00:57:02.677388 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.677405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-07 00:57:02.677423 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.677440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-07 00:57:02.677453 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:57:02.677463 | orchestrator |
2026-03-07 00:57:02.677474 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2026-03-07 00:57:02.677489 | orchestrator | Saturday 07 March 2026 00:52:48 +0000 (0:00:01.658) 0:03:26.200 ********
2026-03-07 00:57:02.677500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-07 00:57:02.677512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-07 00:57:02.677525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-07 00:57:02.677537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-07 00:57:02.677560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-03-07 00:57:02.677570 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:57:02.677581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-07 00:57:02.677593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-07 00:57:02.677604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-07 00:57:02.677614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-07 00:57:02.677625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-07 00:57:02.677642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-07 00:57:02.677652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-07 00:57:02.677662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-03-07 00:57:02.677677 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:57:02.677688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-07 00:57:02.677698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-03-07 00:57:02.677707 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:57:02.677717 | orchestrator |
2026-03-07 00:57:02.677726 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-03-07 00:57:02.677744 | orchestrator | Saturday 07 March 2026 00:52:49 +0000 (0:00:01.419) 0:03:27.619 ********
2026-03-07 00:57:02.677754 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:57:02.677763 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:57:02.677833 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:57:02.677844 | orchestrator |
2026-03-07 00:57:02.677853 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-03-07 00:57:02.677863 | orchestrator | Saturday 07 March 2026 00:52:51 +0000 (0:00:01.560) 0:03:29.180 ********
2026-03-07 00:57:02.677874 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:57:02.677883 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:57:02.677892 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:57:02.677902 | orchestrator |
2026-03-07 00:57:02.677911 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-03-07 00:57:02.677921 | orchestrator | Saturday 07 March 2026 00:52:53 +0000 (0:00:02.610) 0:03:31.790 ********
2026-03-07 00:57:02.677930 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:57:02.677940 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:57:02.677950 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:57:02.677959 | orchestrator |
2026-03-07 00:57:02.677969 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-03-07 00:57:02.677978 | orchestrator | Saturday 07 March 2026 00:52:54 +0000 (0:00:00.536) 0:03:32.327 ********
2026-03-07 00:57:02.677987 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:57:02.677996 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:57:02.678004 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:57:02.678014 | orchestrator |
2026-03-07 00:57:02.678067 | orchestrator | TASK [include_role : keystone] *************************************************
2026-03-07 00:57:02.678077 | orchestrator | Saturday 07 March 2026 00:52:55 +0000 (0:00:01.187) 0:03:33.514 ********
2026-03-07 00:57:02.678087 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:57:02.678095 | orchestrator |
2026-03-07 00:57:02.678104 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-03-07 00:57:02.678112 | orchestrator | Saturday 07 March 2026 00:52:56 +0000 (0:00:01.149) 0:03:34.664 ********
2026-03-07 00:57:02.678123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 00:57:02.678150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 00:57:02.678172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 00:57:02.678182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 00:57:02.678191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 00:57:02.678199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 00:57:02.678215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 00:57:02.678229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 00:57:02.678244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 00:57:02.678253 | orchestrator |
2026-03-07 00:57:02.678261 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2026-03-07 00:57:02.678270 | orchestrator | Saturday 07 March 2026 00:53:02 +0000 (0:00:05.367) 0:03:40.032 ********
2026-03-07 00:57:02.678278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 00:57:02.678287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 00:57:02.678296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 00:57:02.678305 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:57:02.678320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 00:57:02.678335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 00:57:02.678345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 00:57:02.678353 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:57:02.678414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 00:57:02.678438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 00:57:02.678465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 00:57:02.678481 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:57:02.678489 | orchestrator |
2026-03-07 00:57:02.678497 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-03-07 00:57:02.678506 | orchestrator | Saturday 07 March 2026 00:53:03 +0000 (0:00:00.942) 0:03:40.974 ********
2026-03-07 00:57:02.678520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-07 00:57:02.678532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-07 00:57:02.678541 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:57:02.678548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-07 00:57:02.678557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-07 00:57:02.678566 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:57:02.678575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-07 00:57:02.678583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-07 00:57:02.678591 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:57:02.678599 | orchestrator |
2026-03-07 00:57:02.678608 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-03-07 00:57:02.678617 | orchestrator | Saturday 07 March 2026 00:53:04 +0000 (0:00:00.971) 0:03:41.946 ********
2026-03-07 00:57:02.678625 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:57:02.678633 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:57:02.678642 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:57:02.678651 | orchestrator |
2026-03-07 00:57:02.678659 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-03-07 00:57:02.678667 | orchestrator | Saturday 07 March 2026 00:53:05 +0000 (0:00:01.498) 0:03:43.444 ********
2026-03-07 00:57:02.678676 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:57:02.678684 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:57:02.678692 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:57:02.678700 | orchestrator |
2026-03-07 00:57:02.678709 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-03-07 00:57:02.678718 | orchestrator | Saturday 07 March 2026 00:53:08 +0000 (0:00:02.575) 0:03:46.019 ********
2026-03-07 00:57:02.678726 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:57:02.678735 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:57:02.678804 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:57:02.678814 | orchestrator |
2026-03-07 00:57:02.678822 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-03-07 00:57:02.678827 | orchestrator | Saturday 07 March 2026 00:53:08 +0000 (0:00:00.738) 0:03:46.758 ********
2026-03-07 00:57:02.678833 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:57:02.678838 | orchestrator |
2026-03-07 00:57:02.678843 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-03-07 00:57:02.678849 | orchestrator | Saturday 07 March 2026 00:53:10 +0000 (0:00:01.173) 0:03:47.931 ********
2026-03-07 00:57:02.678862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 00:57:02.678874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.678881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 00:57:02.678887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.678897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 00:57:02.678907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.678913 | orchestrator |
2026-03-07 00:57:02.678919 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-03-07 00:57:02.678924 | orchestrator | Saturday 07 March 2026 00:53:14 +0000 (0:00:04.131) 0:03:52.063 ********
2026-03-07 00:57:02.678934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 00:57:02.678940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.678945 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:57:02.678951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 00:57:02.678961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-07 00:57:02.678967 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:57:02.678980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-07 00:57:02.678986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.678991 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.678997 | orchestrator | 2026-03-07 00:57:02.679002 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-07 00:57:02.679008 | orchestrator | Saturday 07 March 2026 00:53:15 +0000 (0:00:01.094) 0:03:53.157 ******** 2026-03-07 00:57:02.679014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-07 00:57:02.679020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-07 00:57:02.679026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-07 00:57:02.679035 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.679040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-07 00:57:02.679046 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.679051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-07 00:57:02.679057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-07 
00:57:02.679062 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.679068 | orchestrator | 2026-03-07 00:57:02.679073 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-07 00:57:02.679079 | orchestrator | Saturday 07 March 2026 00:53:16 +0000 (0:00:01.150) 0:03:54.307 ******** 2026-03-07 00:57:02.679084 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.679089 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.679095 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.679100 | orchestrator | 2026-03-07 00:57:02.679106 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-07 00:57:02.679111 | orchestrator | Saturday 07 March 2026 00:53:17 +0000 (0:00:01.563) 0:03:55.871 ******** 2026-03-07 00:57:02.679116 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.679122 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.679127 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.679132 | orchestrator | 2026-03-07 00:57:02.679138 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-07 00:57:02.679143 | orchestrator | Saturday 07 March 2026 00:53:20 +0000 (0:00:02.538) 0:03:58.409 ******** 2026-03-07 00:57:02.679148 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:57:02.679154 | orchestrator | 2026-03-07 00:57:02.679159 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-07 00:57:02.679165 | orchestrator | Saturday 07 March 2026 00:53:22 +0000 (0:00:01.706) 0:04:00.116 ******** 2026-03-07 00:57:02.679178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-07 00:57:02.679184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.679190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.679201 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.679207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-07 00:57:02.679216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-07 00:57:02.679224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.679230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.679240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.679246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.679251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.679260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.679266 | orchestrator | 2026-03-07 00:57:02.679271 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-07 00:57:02.679277 | orchestrator | Saturday 07 March 2026 00:53:27 +0000 (0:00:04.960) 0:04:05.077 ******** 2026-03-07 00:57:02.679288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-07 00:57:02.679297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.679303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.679308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.679314 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.679320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-07 00:57:02.679329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.679338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.679348 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-07 00:57:02.679353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.679359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.679365 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.679371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.679380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.679386 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.679392 | orchestrator | 2026-03-07 00:57:02.679397 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-07 00:57:02.679403 | orchestrator | Saturday 07 March 2026 00:53:28 +0000 (0:00:00.910) 0:04:05.987 ******** 2026-03-07 00:57:02.679414 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-07 00:57:02.679422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-07 00:57:02.679428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-07 00:57:02.679434 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.679441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-07 00:57:02.679450 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.679457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-07 00:57:02.679463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-07 00:57:02.679468 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.679474 | orchestrator | 2026-03-07 00:57:02.679479 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-07 00:57:02.679485 | orchestrator | Saturday 07 March 2026 00:53:30 +0000 (0:00:02.096) 0:04:08.084 ******** 2026-03-07 00:57:02.679490 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.679495 | orchestrator | changed: [testbed-node-1] 2026-03-07 
00:57:02.679501 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.679506 | orchestrator | 2026-03-07 00:57:02.679512 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-07 00:57:02.679518 | orchestrator | Saturday 07 March 2026 00:53:31 +0000 (0:00:01.479) 0:04:09.563 ******** 2026-03-07 00:57:02.679523 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.679528 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.679534 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.679539 | orchestrator | 2026-03-07 00:57:02.679544 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-07 00:57:02.679550 | orchestrator | Saturday 07 March 2026 00:53:34 +0000 (0:00:02.367) 0:04:11.931 ******** 2026-03-07 00:57:02.679555 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:57:02.679561 | orchestrator | 2026-03-07 00:57:02.679566 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-07 00:57:02.679571 | orchestrator | Saturday 07 March 2026 00:53:35 +0000 (0:00:01.662) 0:04:13.593 ******** 2026-03-07 00:57:02.679577 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-07 00:57:02.679583 | orchestrator | 2026-03-07 00:57:02.679588 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-07 00:57:02.679594 | orchestrator | Saturday 07 March 2026 00:53:38 +0000 (0:00:03.163) 0:04:16.757 ******** 2026-03-07 00:57:02.679604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-07 00:57:02.679620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-07 00:57:02.679626 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.679632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-07 00:57:02.679638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-07 00:57:02.679649 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.679663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-07 00:57:02.679669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-07 00:57:02.679675 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.679680 | orchestrator | 2026-03-07 00:57:02.679687 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-07 00:57:02.679696 | orchestrator | Saturday 07 March 2026 00:53:40 +0000 (0:00:02.073) 0:04:18.830 ******** 2026-03-07 00:57:02.679710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 
'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-07 00:57:02.679731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-07 00:57:02.679740 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.679749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-07 00:57:02.679758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-07 00:57:02.679791 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.679813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-07 00:57:02.679824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-07 00:57:02.679833 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.679841 | orchestrator | 2026-03-07 00:57:02.679849 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-07 00:57:02.679858 | orchestrator | Saturday 07 March 2026 00:53:43 +0000 (0:00:02.223) 0:04:21.054 ******** 2026-03-07 00:57:02.679867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-07 00:57:02.679877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-07 00:57:02.679892 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.679902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-07 00:57:02.679917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-07 00:57:02.679927 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.679938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-07 00:57:02.679944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-07 00:57:02.679949 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.679955 | orchestrator | 2026-03-07 00:57:02.679960 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] 
************ 2026-03-07 00:57:02.679966 | orchestrator | Saturday 07 March 2026 00:53:45 +0000 (0:00:02.669) 0:04:23.723 ******** 2026-03-07 00:57:02.679971 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.679976 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.679982 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.679987 | orchestrator | 2026-03-07 00:57:02.679992 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-07 00:57:02.679998 | orchestrator | Saturday 07 March 2026 00:53:47 +0000 (0:00:01.926) 0:04:25.649 ******** 2026-03-07 00:57:02.680003 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.680009 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.680014 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.680019 | orchestrator | 2026-03-07 00:57:02.680025 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-07 00:57:02.680030 | orchestrator | Saturday 07 March 2026 00:53:49 +0000 (0:00:01.357) 0:04:27.007 ******** 2026-03-07 00:57:02.680035 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.680045 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.680050 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.680056 | orchestrator | 2026-03-07 00:57:02.680061 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-07 00:57:02.680066 | orchestrator | Saturday 07 March 2026 00:53:49 +0000 (0:00:00.306) 0:04:27.313 ******** 2026-03-07 00:57:02.680072 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:57:02.680077 | orchestrator | 2026-03-07 00:57:02.680082 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-07 00:57:02.680087 | orchestrator | Saturday 07 March 2026 00:53:50 +0000 
(0:00:01.287) 0:04:28.601 ******** 2026-03-07 00:57:02.680093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-07 00:57:02.680104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-07 00:57:02.680113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-07 00:57:02.680119 | orchestrator | 2026-03-07 00:57:02.680124 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-07 00:57:02.680129 | orchestrator | Saturday 07 March 2026 00:53:52 +0000 (0:00:01.606) 0:04:30.207 ******** 2026-03-07 00:57:02.680135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-07 00:57:02.680146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-07 00:57:02.680151 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.680157 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.680163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-07 00:57:02.680168 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.680174 | orchestrator | 2026-03-07 00:57:02.680179 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-07 00:57:02.680185 | orchestrator | Saturday 07 March 2026 00:53:52 +0000 (0:00:00.383) 0:04:30.590 ******** 2026-03-07 00:57:02.680190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-07 00:57:02.680196 | 
orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.680205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-07 00:57:02.680211 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.680216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-07 00:57:02.680222 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.680227 | orchestrator | 2026-03-07 00:57:02.680233 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-07 00:57:02.680238 | orchestrator | Saturday 07 March 2026 00:53:53 +0000 (0:00:00.739) 0:04:31.330 ******** 2026-03-07 00:57:02.680247 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.680252 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.680258 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.680263 | orchestrator | 2026-03-07 00:57:02.680269 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-07 00:57:02.680274 | orchestrator | Saturday 07 March 2026 00:53:53 +0000 (0:00:00.453) 0:04:31.783 ******** 2026-03-07 00:57:02.680279 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.680285 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.680295 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.680300 | orchestrator | 2026-03-07 00:57:02.680306 | orchestrator | TASK [include_role : mistral] ************************************************** 
2026-03-07 00:57:02.680311 | orchestrator | Saturday 07 March 2026 00:53:55 +0000 (0:00:01.299) 0:04:33.083 ******** 2026-03-07 00:57:02.680316 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.680322 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.680327 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.680333 | orchestrator | 2026-03-07 00:57:02.680338 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-07 00:57:02.680344 | orchestrator | Saturday 07 March 2026 00:53:55 +0000 (0:00:00.348) 0:04:33.431 ******** 2026-03-07 00:57:02.680349 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:57:02.680355 | orchestrator | 2026-03-07 00:57:02.680360 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-07 00:57:02.680365 | orchestrator | Saturday 07 March 2026 00:53:57 +0000 (0:00:01.695) 0:04:35.127 ******** 2026-03-07 00:57:02.680371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 00:57:02.680377 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.680534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.680551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.680565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-07 00:57:02.680571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 00:57:02.680577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.680624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.680633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:57:02.680649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.680655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:57:02.680661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.680667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.680673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-07 00:57:02.680722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 00:57:02.680737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.680746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.680755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-07 00:57:02.680765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:57:02.680790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:57:02.680800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:57:02.680853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.680880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.680889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 00:57:02.680898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-07 00:57:02.680908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-07 00:57:02.680971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-07 00:57:02.681062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}}})  2026-03-07 00:57:02.681073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-07 00:57:02.681092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-07 00:57:02.681158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 00:57:02.681178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-07 00:57:02.681228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:57:02.681275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:57:02.681281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 00:57:02.681293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-07 00:57:02.681347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:57:02.681359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-07 00:57:02.681372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-07 00:57:02.681377 | orchestrator | 2026-03-07 00:57:02.681383 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-07 00:57:02.681389 | orchestrator | Saturday 07 March 2026 00:54:01 +0000 (0:00:04.761) 0:04:39.888 ******** 2026-03-07 00:57:02.681395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 00:57:02.681452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-07 00:57:02.681483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:57:02.681534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:57:02.681542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 
00:57:02.681555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-07 00:57:02.681616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:57:02.681643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 
'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-07 00:57:02.681731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 00:57:02.681741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-07 00:57:02.681752 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.681758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 00:57:02.681912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-07 00:57:02.681919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.681994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:57:02.682000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-07 00:57:02.682006 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:57:02.682037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.682084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.682093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:57:02.682103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 00:57:02.682108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:57:02.682114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 
'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.682124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.682130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-07 00:57:02.682189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 00:57:02.682202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:57:02.682208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.682215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': 
{'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.682225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-07 00:57:02.682231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-07 00:57:02.682255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-07 00:57:02.682264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-07 00:57:02.682271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.682276 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.682282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-07 00:57:02.682292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-07 00:57:02.682298 | orchestrator | skipping: [testbed-node-2] 2026-03-07 
00:57:02.682304 | orchestrator | 2026-03-07 00:57:02.682309 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-07 00:57:02.682316 | orchestrator | Saturday 07 March 2026 00:54:03 +0000 (0:00:01.768) 0:04:41.657 ******** 2026-03-07 00:57:02.682322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-07 00:57:02.682328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-07 00:57:02.682343 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.682363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-07 00:57:02.682369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-07 00:57:02.682374 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.682379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-07 00:57:02.682384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-07 00:57:02.682389 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.682394 | orchestrator | 2026-03-07 00:57:02.682404 | orchestrator | TASK 
[proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-07 00:57:02.682409 | orchestrator | Saturday 07 March 2026 00:54:06 +0000 (0:00:02.352) 0:04:44.009 ******** 2026-03-07 00:57:02.682414 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.682419 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.682424 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.682429 | orchestrator | 2026-03-07 00:57:02.682433 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-07 00:57:02.682438 | orchestrator | Saturday 07 March 2026 00:54:07 +0000 (0:00:01.398) 0:04:45.408 ******** 2026-03-07 00:57:02.682443 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.682452 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.682457 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.682461 | orchestrator | 2026-03-07 00:57:02.682466 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-07 00:57:02.682471 | orchestrator | Saturday 07 March 2026 00:54:09 +0000 (0:00:02.073) 0:04:47.481 ******** 2026-03-07 00:57:02.682476 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:57:02.682481 | orchestrator | 2026-03-07 00:57:02.682486 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-07 00:57:02.682491 | orchestrator | Saturday 07 March 2026 00:54:10 +0000 (0:00:01.215) 0:04:48.696 ******** 2026-03-07 00:57:02.682496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 00:57:02.682501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 00:57:02.682522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 00:57:02.682528 | orchestrator | 2026-03-07 00:57:02.682533 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-07 00:57:02.682538 | orchestrator | Saturday 07 March 2026 00:54:14 +0000 (0:00:03.449) 0:04:52.146 ******** 2026-03-07 00:57:02.682546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 00:57:02.682555 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.682560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 00:57:02.682565 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.682572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 00:57:02.682581 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.682590 | orchestrator | 2026-03-07 00:57:02.682601 | orchestrator | TASK [haproxy-config : Configuring firewall for 
placement] ********************* 2026-03-07 00:57:02.682608 | orchestrator | Saturday 07 March 2026 00:54:14 +0000 (0:00:00.487) 0:04:52.634 ******** 2026-03-07 00:57:02.682616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-07 00:57:02.682625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-07 00:57:02.682633 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.682665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-07 00:57:02.682674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-07 00:57:02.682687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-07 00:57:02.682692 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.682701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-07 00:57:02.682706 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.682712 | orchestrator | 2026-03-07 00:57:02.682721 | orchestrator | TASK 
[proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-07 00:57:02.682728 | orchestrator | Saturday 07 March 2026 00:54:15 +0000 (0:00:00.729) 0:04:53.364 ******** 2026-03-07 00:57:02.682735 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.682742 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.682749 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.682756 | orchestrator | 2026-03-07 00:57:02.682763 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-07 00:57:02.682793 | orchestrator | Saturday 07 March 2026 00:54:17 +0000 (0:00:01.852) 0:04:55.216 ******** 2026-03-07 00:57:02.682800 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.682806 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.682813 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.682821 | orchestrator | 2026-03-07 00:57:02.682829 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-07 00:57:02.682836 | orchestrator | Saturday 07 March 2026 00:54:19 +0000 (0:00:01.934) 0:04:57.151 ******** 2026-03-07 00:57:02.682843 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:57:02.682850 | orchestrator | 2026-03-07 00:57:02.682858 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-07 00:57:02.682866 | orchestrator | Saturday 07 March 2026 00:54:20 +0000 (0:00:01.586) 0:04:58.737 ******** 2026-03-07 00:57:02.682875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 00:57:02.682886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.682930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.682946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 00:57:02.682956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.682964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.682972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}}}}) 2026-03-07 00:57:02.683009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.683022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.683031 | orchestrator | 2026-03-07 00:57:02.683039 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-07 00:57:02.683046 | orchestrator | Saturday 07 March 2026 00:54:25 +0000 (0:00:04.404) 0:05:03.141 ******** 2026-03-07 00:57:02.683055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 00:57:02.683063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.683072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.683086 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.683121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 00:57:02.683132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.683141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.683149 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.683157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 00:57:02.683170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.683202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.683212 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.683220 | orchestrator | 2026-03-07 00:57:02.683228 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-07 00:57:02.683235 | orchestrator | Saturday 07 March 2026 00:54:26 +0000 (0:00:01.121) 0:05:04.263 ******** 2026-03-07 00:57:02.683248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-07 00:57:02.683259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-07 00:57:02.683268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-07 00:57:02.683276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-07 00:57:02.683284 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.683292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-07 00:57:02.683300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-07 00:57:02.683308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-07 00:57:02.683316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-07 00:57:02.683324 | 
orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.683333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-07 00:57:02.683348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-07 00:57:02.683356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-07 00:57:02.683363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-07 00:57:02.683368 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.683373 | orchestrator | 2026-03-07 00:57:02.683378 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-07 00:57:02.683383 | orchestrator | Saturday 07 March 2026 00:54:27 +0000 (0:00:00.930) 0:05:05.194 ******** 2026-03-07 00:57:02.683388 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.683393 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.683398 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.683403 | orchestrator | 2026-03-07 00:57:02.683407 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-07 00:57:02.683412 | orchestrator | Saturday 07 March 2026 00:54:28 +0000 (0:00:01.325) 0:05:06.520 ******** 2026-03-07 00:57:02.683417 | orchestrator | changed: [testbed-node-0] 2026-03-07 
00:57:02.683422 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.683428 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.683436 | orchestrator | 2026-03-07 00:57:02.683470 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-07 00:57:02.683480 | orchestrator | Saturday 07 March 2026 00:54:30 +0000 (0:00:02.109) 0:05:08.629 ******** 2026-03-07 00:57:02.683488 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:57:02.683496 | orchestrator | 2026-03-07 00:57:02.683504 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-07 00:57:02.683512 | orchestrator | Saturday 07 March 2026 00:54:32 +0000 (0:00:01.465) 0:05:10.094 ******** 2026-03-07 00:57:02.683520 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-07 00:57:02.683529 | orchestrator | 2026-03-07 00:57:02.683537 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-07 00:57:02.683546 | orchestrator | Saturday 07 March 2026 00:54:33 +0000 (0:00:01.007) 0:05:11.102 ******** 2026-03-07 00:57:02.683564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-07 00:57:02.683574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 
'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-07 00:57:02.683582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-07 00:57:02.683597 | orchestrator | 2026-03-07 00:57:02.683605 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-07 00:57:02.683614 | orchestrator | Saturday 07 March 2026 00:54:38 +0000 (0:00:05.267) 0:05:16.370 ******** 2026-03-07 00:57:02.683623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-07 00:57:02.683631 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.683639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 
'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-07 00:57:02.683648 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.683657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-07 00:57:02.683665 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.683673 | orchestrator | 2026-03-07 00:57:02.683710 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-07 00:57:02.683723 | orchestrator | Saturday 07 March 2026 00:54:39 +0000 (0:00:01.248) 0:05:17.618 ******** 2026-03-07 00:57:02.683731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-07 00:57:02.683740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-07 00:57:02.683749 | orchestrator | skipping: [testbed-node-0] 
2026-03-07 00:57:02.683761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-07 00:57:02.683789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-07 00:57:02.683797 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.683804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-07 00:57:02.683818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-07 00:57:02.683825 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.683833 | orchestrator | 2026-03-07 00:57:02.683840 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-07 00:57:02.683848 | orchestrator | Saturday 07 March 2026 00:54:41 +0000 (0:00:01.462) 0:05:19.081 ******** 2026-03-07 00:57:02.683856 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.683863 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.683871 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.683878 | orchestrator | 2026-03-07 00:57:02.683885 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-07 00:57:02.683893 | orchestrator | Saturday 07 March 2026 
00:54:43 +0000 (0:00:02.351) 0:05:21.432 ******** 2026-03-07 00:57:02.683900 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.683908 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.683915 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.683922 | orchestrator | 2026-03-07 00:57:02.683930 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-07 00:57:02.683936 | orchestrator | Saturday 07 March 2026 00:54:46 +0000 (0:00:02.830) 0:05:24.263 ******** 2026-03-07 00:57:02.683944 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-07 00:57:02.683951 | orchestrator | 2026-03-07 00:57:02.683959 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-07 00:57:02.683968 | orchestrator | Saturday 07 March 2026 00:54:47 +0000 (0:00:01.288) 0:05:25.551 ******** 2026-03-07 00:57:02.683975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-07 00:57:02.683982 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.683989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': 
['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-07 00:57:02.683997 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.684042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-07 00:57:02.684052 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.684061 | orchestrator | 2026-03-07 00:57:02.684069 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-07 00:57:02.684083 | orchestrator | Saturday 07 March 2026 00:54:48 +0000 (0:00:01.293) 0:05:26.845 ******** 2026-03-07 00:57:02.684097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-07 00:57:02.684105 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.684113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-07 00:57:02.684121 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.684129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-07 00:57:02.684137 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.684144 | orchestrator | 2026-03-07 00:57:02.684152 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-07 00:57:02.684160 | orchestrator | Saturday 07 March 2026 00:54:50 +0000 (0:00:01.247) 0:05:28.093 ******** 2026-03-07 00:57:02.684168 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.684175 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.684183 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.684191 | orchestrator | 2026-03-07 00:57:02.684198 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-07 00:57:02.684205 | orchestrator | Saturday 07 March 2026 00:54:51 +0000 (0:00:01.776) 0:05:29.870 ******** 2026-03-07 00:57:02.684212 
| orchestrator | ok: [testbed-node-0] 2026-03-07 00:57:02.684221 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:57:02.684228 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:57:02.684236 | orchestrator | 2026-03-07 00:57:02.684244 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-07 00:57:02.684252 | orchestrator | Saturday 07 March 2026 00:54:54 +0000 (0:00:02.464) 0:05:32.335 ******** 2026-03-07 00:57:02.684260 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:57:02.684267 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:57:02.684276 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:57:02.684283 | orchestrator | 2026-03-07 00:57:02.684291 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-07 00:57:02.684298 | orchestrator | Saturday 07 March 2026 00:54:58 +0000 (0:00:03.632) 0:05:35.967 ******** 2026-03-07 00:57:02.684306 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-07 00:57:02.684314 | orchestrator | 2026-03-07 00:57:02.684321 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-07 00:57:02.684328 | orchestrator | Saturday 07 March 2026 00:54:59 +0000 (0:00:01.019) 0:05:36.987 ******** 2026-03-07 00:57:02.684365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-07 
00:57:02.684379 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.684388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-07 00:57:02.684396 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.684409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-07 00:57:02.684418 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.684427 | orchestrator | 2026-03-07 00:57:02.684435 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-07 00:57:02.684444 | orchestrator | Saturday 07 March 2026 00:55:00 +0000 (0:00:01.620) 0:05:38.607 ******** 2026-03-07 00:57:02.684453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-07 00:57:02.684462 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.684469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-07 00:57:02.684478 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.684485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-07 00:57:02.684492 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.684499 | orchestrator | 2026-03-07 00:57:02.684507 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-07 00:57:02.684522 | orchestrator | Saturday 07 March 2026 00:55:02 +0000 (0:00:01.607) 0:05:40.215 ******** 2026-03-07 00:57:02.684531 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.684539 | orchestrator | skipping: [testbed-node-2] 
2026-03-07 00:57:02.684548 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.684557 | orchestrator | 2026-03-07 00:57:02.684565 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-07 00:57:02.684574 | orchestrator | Saturday 07 March 2026 00:55:04 +0000 (0:00:01.833) 0:05:42.048 ******** 2026-03-07 00:57:02.684582 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:57:02.684591 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:57:02.684600 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:57:02.684609 | orchestrator | 2026-03-07 00:57:02.684617 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-07 00:57:02.684625 | orchestrator | Saturday 07 March 2026 00:55:06 +0000 (0:00:02.705) 0:05:44.754 ******** 2026-03-07 00:57:02.684634 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:57:02.684642 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:57:02.684651 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:57:02.684659 | orchestrator | 2026-03-07 00:57:02.684668 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-07 00:57:02.684677 | orchestrator | Saturday 07 March 2026 00:55:10 +0000 (0:00:03.880) 0:05:48.634 ******** 2026-03-07 00:57:02.684710 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:57:02.684720 | orchestrator | 2026-03-07 00:57:02.684727 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-07 00:57:02.684735 | orchestrator | Saturday 07 March 2026 00:55:12 +0000 (0:00:01.890) 0:05:50.525 ******** 2026-03-07 00:57:02.684750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 00:57:02.684761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 00:57:02.684788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 00:57:02.684804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 00:57:02.684813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 00:57:02.684849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 00:57:02.684864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 00:57:02.684873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.684882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 00:57:02.684897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.684906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 00:57:02.684937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 00:57:02.684952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 00:57:02.684961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 00:57:02.684969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.684982 | orchestrator | 2026-03-07 00:57:02.684991 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-07 00:57:02.684999 | orchestrator | Saturday 07 March 2026 00:55:16 +0000 (0:00:04.039) 0:05:54.564 ******** 2026-03-07 00:57:02.685008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 00:57:02.685016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 00:57:02.685048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 00:57:02.685065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 00:57:02.685074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.685082 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.685090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 00:57:02.685104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 00:57:02.685113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 00:57:02.685143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 00:57:02.685151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.685159 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.685172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 00:57:02.685186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 00:57:02.685195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 00:57:02.685204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 00:57:02.685234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 00:57:02.685245 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.685253 | orchestrator | 2026-03-07 00:57:02.685261 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-07 00:57:02.685270 | orchestrator | Saturday 07 March 2026 00:55:17 +0000 (0:00:00.804) 0:05:55.368 ******** 2026-03-07 00:57:02.685278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-07 00:57:02.685287 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-07 00:57:02.685296 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.685309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-07 00:57:02.685317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-07 00:57:02.685331 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.685339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-07 00:57:02.685347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-07 00:57:02.685358 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.685367 | orchestrator | 2026-03-07 00:57:02.685375 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-07 00:57:02.685383 | orchestrator | Saturday 07 March 2026 00:55:19 +0000 (0:00:01.948) 0:05:57.317 ******** 2026-03-07 00:57:02.685391 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.685399 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.685407 | orchestrator | changed: [testbed-node-2] 2026-03-07 
00:57:02.685415 | orchestrator | 2026-03-07 00:57:02.685424 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-07 00:57:02.685432 | orchestrator | Saturday 07 March 2026 00:55:20 +0000 (0:00:01.528) 0:05:58.845 ******** 2026-03-07 00:57:02.685440 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.685448 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.685456 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.685464 | orchestrator | 2026-03-07 00:57:02.685472 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-07 00:57:02.685479 | orchestrator | Saturday 07 March 2026 00:55:23 +0000 (0:00:02.412) 0:06:01.257 ******** 2026-03-07 00:57:02.685486 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:57:02.685494 | orchestrator | 2026-03-07 00:57:02.685503 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-07 00:57:02.685510 | orchestrator | Saturday 07 March 2026 00:55:24 +0000 (0:00:01.408) 0:06:02.666 ******** 2026-03-07 00:57:02.685519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:57:02.685554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:57:02.685570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:57:02.685590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:57:02.685600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:57:02.685632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:57:02.685642 | orchestrator | 2026-03-07 00:57:02.685650 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-07 00:57:02.685665 | orchestrator | Saturday 07 March 2026 00:55:30 +0000 (0:00:06.104) 0:06:08.771 ******** 2026-03-07 00:57:02.685678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-07 00:57:02.685687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-07 00:57:02.685696 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.685704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-07 00:57:02.685737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-07 00:57:02.685753 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.685820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-07 00:57:02.685832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-07 00:57:02.685840 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.685848 | orchestrator | 2026-03-07 00:57:02.685856 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-07 00:57:02.685864 | orchestrator | Saturday 07 March 2026 00:55:31 +0000 (0:00:00.745) 0:06:09.516 ******** 2026-03-07 
00:57:02.685873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-07 00:57:02.685882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-07 00:57:02.685891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-07 00:57:02.685904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-07 00:57:02.685913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-07 00:57:02.685922 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.685930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-07 00:57:02.685938 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.685953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-07 00:57:02.685992 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-07 00:57:02.686001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-07 00:57:02.686009 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.686043 | orchestrator | 2026-03-07 00:57:02.686053 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-07 00:57:02.686061 | orchestrator | Saturday 07 March 2026 00:55:32 +0000 (0:00:01.055) 0:06:10.572 ******** 2026-03-07 00:57:02.686068 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.686076 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.686083 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.686091 | orchestrator | 2026-03-07 00:57:02.686098 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-07 00:57:02.686111 | orchestrator | Saturday 07 March 2026 00:55:33 +0000 (0:00:00.945) 0:06:11.517 ******** 2026-03-07 00:57:02.686119 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.686127 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.686135 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.686143 | orchestrator | 2026-03-07 00:57:02.686151 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-07 00:57:02.686158 | orchestrator | Saturday 07 March 2026 00:55:35 +0000 (0:00:01.619) 0:06:13.137 ******** 2026-03-07 00:57:02.686166 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-03-07 00:57:02.686174 | orchestrator | 2026-03-07 00:57:02.686181 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-07 00:57:02.686189 | orchestrator | Saturday 07 March 2026 00:55:36 +0000 (0:00:01.624) 0:06:14.761 ******** 2026-03-07 00:57:02.686197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-07 00:57:02.686205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 00:57:02.686213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 00:57:02.686280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-07 00:57:02.686289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 00:57:02.686297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686313 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 00:57:02.686330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-07 00:57:02.686360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 00:57:02.686370 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 00:57:02.686395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-07 00:57:02.686412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-07 00:57:02.686479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-07 00:57:02.686516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-07 00:57:02.686525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-07 00:57:02.686538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-07 00:57:02.686571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-07 00:57:02.686576 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-07 00:57:02.686585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 
00:57:02.686595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-07 00:57:02.686600 | orchestrator | 2026-03-07 00:57:02.686608 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-07 00:57:02.686613 | orchestrator | Saturday 07 March 2026 00:55:42 +0000 (0:00:05.307) 0:06:20.068 ******** 2026-03-07 00:57:02.686622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-07 00:57:02.686631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 00:57:02.686639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 00:57:02.686668 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-07 00:57:02.686685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-07 00:57:02.686694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-07 00:57:02.686722 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.686727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-07 00:57:02.686732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 00:57:02.686745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 00:57:02.686791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-07 00:57:02.686806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-07 00:57:02.686815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 00:57:02.686826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 
'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-07 00:57:02.686837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 00:57:02.686884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-07 00:57:02.686897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-07 00:57:02.686906 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.686917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-07 00:57:02.686933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 00:57:02.686949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-07 00:57:02.686957 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.686964 | orchestrator | 2026-03-07 00:57:02.686972 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-07 00:57:02.686980 | orchestrator | Saturday 07 March 2026 00:55:43 +0000 (0:00:01.461) 0:06:21.530 ******** 2026-03-07 00:57:02.686988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-07 00:57:02.686998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-07 
00:57:02.687006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-07 00:57:02.687014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-07 00:57:02.687028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-07 00:57:02.687037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-07 00:57:02.687044 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.687052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-07 00:57:02.687063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-07 00:57:02.687077 | orchestrator | skipping: [testbed-node-1] 2026-03-07 
00:57:02.687085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-07 00:57:02.687093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-07 00:57:02.687101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-07 00:57:02.687109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-07 00:57:02.687117 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.687125 | orchestrator | 2026-03-07 00:57:02.687133 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-07 00:57:02.687141 | orchestrator | Saturday 07 March 2026 00:55:44 +0000 (0:00:01.142) 0:06:22.672 ******** 2026-03-07 00:57:02.687148 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.687156 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.687164 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.687171 | orchestrator | 2026-03-07 00:57:02.687179 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-07 00:57:02.687187 | orchestrator | Saturday 07 March 2026 00:55:45 +0000 (0:00:00.536) 
0:06:23.209 ******** 2026-03-07 00:57:02.687195 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.687203 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.687211 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.687218 | orchestrator | 2026-03-07 00:57:02.687226 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-07 00:57:02.687233 | orchestrator | Saturday 07 March 2026 00:55:47 +0000 (0:00:01.690) 0:06:24.899 ******** 2026-03-07 00:57:02.687241 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:57:02.687248 | orchestrator | 2026-03-07 00:57:02.687256 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-07 00:57:02.687264 | orchestrator | Saturday 07 March 2026 00:55:49 +0000 (0:00:02.119) 0:06:27.018 ******** 2026-03-07 00:57:02.687276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-07 00:57:02.687288 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-07 00:57:02.687302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 
2026-03-07 00:57:02.687308 | orchestrator | 2026-03-07 00:57:02.687312 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-07 00:57:02.687317 | orchestrator | Saturday 07 March 2026 00:55:52 +0000 (0:00:03.261) 0:06:30.280 ******** 2026-03-07 00:57:02.687322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-07 00:57:02.687327 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.687335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-07 00:57:02.687343 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.687354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-07 00:57:02.687362 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.687370 | orchestrator | 2026-03-07 00:57:02.687377 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-07 00:57:02.687383 | orchestrator | Saturday 07 March 2026 00:55:52 +0000 (0:00:00.459) 0:06:30.740 ******** 2026-03-07 00:57:02.687391 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-07 00:57:02.687399 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.687407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-07 00:57:02.687414 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.687420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-07 00:57:02.687428 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.687435 | orchestrator | 2026-03-07 00:57:02.687442 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-07 00:57:02.687449 | orchestrator | Saturday 07 March 2026 00:55:54 +0000 (0:00:01.176) 0:06:31.917 ******** 2026-03-07 00:57:02.687456 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.687463 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.687470 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.687477 | orchestrator | 2026-03-07 00:57:02.687484 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-07 00:57:02.687491 | orchestrator | Saturday 07 March 2026 00:55:54 +0000 (0:00:00.480) 0:06:32.397 ******** 2026-03-07 00:57:02.687498 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.687504 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.687511 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.687519 | orchestrator | 2026-03-07 00:57:02.687526 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-07 00:57:02.687534 | orchestrator | Saturday 07 March 2026 00:55:56 +0000 
(0:00:01.695) 0:06:34.093 ******** 2026-03-07 00:57:02.687541 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:57:02.687548 | orchestrator | 2026-03-07 00:57:02.687554 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-07 00:57:02.687561 | orchestrator | Saturday 07 March 2026 00:55:58 +0000 (0:00:01.977) 0:06:36.071 ******** 2026-03-07 00:57:02.687568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-07 00:57:02.687590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-07 00:57:02.687604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-07 00:57:02.687613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 
'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-07 00:57:02.687621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-07 00:57:02.687639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-07 00:57:02.687648 | orchestrator | 2026-03-07 00:57:02.687655 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-07 00:57:02.687662 | orchestrator | Saturday 07 March 2026 00:56:05 +0000 (0:00:07.126) 0:06:43.197 ******** 2026-03-07 00:57:02.687673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-07 00:57:02.687681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-07 00:57:02.687689 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.687697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-07 00:57:02.687716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-07 00:57:02.687724 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.687736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-07 00:57:02.687744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-07 00:57:02.687753 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.687761 | orchestrator | 2026-03-07 00:57:02.687787 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-07 00:57:02.687796 | orchestrator | Saturday 07 March 2026 00:56:06 +0000 (0:00:00.723) 0:06:43.920 ******** 2026-03-07 00:57:02.687803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-07 00:57:02.687818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-07 00:57:02.687826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-07 00:57:02.687834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 
'no'}})  2026-03-07 00:57:02.687843 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.687850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-07 00:57:02.687858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-07 00:57:02.687866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-07 00:57:02.687878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-07 00:57:02.687886 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.687893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-07 00:57:02.687900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-07 00:57:02.687908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-07 00:57:02.687921 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-07 00:57:02.687929 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.687936 | orchestrator | 2026-03-07 00:57:02.687944 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-07 00:57:02.687952 | orchestrator | Saturday 07 March 2026 00:56:08 +0000 (0:00:02.086) 0:06:46.007 ******** 2026-03-07 00:57:02.687959 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.687967 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.687974 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.687982 | orchestrator | 2026-03-07 00:57:02.687989 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-07 00:57:02.687997 | orchestrator | Saturday 07 March 2026 00:56:09 +0000 (0:00:01.531) 0:06:47.539 ******** 2026-03-07 00:57:02.688004 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.688012 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.688019 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.688037 | orchestrator | 2026-03-07 00:57:02.688044 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-07 00:57:02.688052 | orchestrator | Saturday 07 March 2026 00:56:11 +0000 (0:00:02.097) 0:06:49.636 ******** 2026-03-07 00:57:02.688060 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.688067 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.688075 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.688083 | orchestrator | 2026-03-07 00:57:02.688091 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-07 00:57:02.688098 | 
orchestrator | Saturday 07 March 2026 00:56:12 +0000 (0:00:00.305) 0:06:49.941 ******** 2026-03-07 00:57:02.688106 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.688114 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.688121 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.688129 | orchestrator | 2026-03-07 00:57:02.688137 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-07 00:57:02.688144 | orchestrator | Saturday 07 March 2026 00:56:12 +0000 (0:00:00.299) 0:06:50.241 ******** 2026-03-07 00:57:02.688152 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.688159 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.688167 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.688174 | orchestrator | 2026-03-07 00:57:02.688182 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-07 00:57:02.688190 | orchestrator | Saturday 07 March 2026 00:56:12 +0000 (0:00:00.621) 0:06:50.862 ******** 2026-03-07 00:57:02.688198 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.688206 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.688213 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.688221 | orchestrator | 2026-03-07 00:57:02.688228 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-07 00:57:02.688236 | orchestrator | Saturday 07 March 2026 00:56:13 +0000 (0:00:00.318) 0:06:51.180 ******** 2026-03-07 00:57:02.688243 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.688250 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.688257 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.688265 | orchestrator | 2026-03-07 00:57:02.688273 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-07 00:57:02.688281 | 
orchestrator | Saturday 07 March 2026 00:56:13 +0000 (0:00:00.324) 0:06:51.505 ******** 2026-03-07 00:57:02.688288 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.688296 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.688303 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.688311 | orchestrator | 2026-03-07 00:57:02.688318 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-07 00:57:02.688326 | orchestrator | Saturday 07 March 2026 00:56:14 +0000 (0:00:00.761) 0:06:52.266 ******** 2026-03-07 00:57:02.688333 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:57:02.688341 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:57:02.688348 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:57:02.688356 | orchestrator | 2026-03-07 00:57:02.688364 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-07 00:57:02.688372 | orchestrator | Saturday 07 March 2026 00:56:15 +0000 (0:00:00.729) 0:06:52.996 ******** 2026-03-07 00:57:02.688380 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:57:02.688388 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:57:02.688395 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:57:02.688403 | orchestrator | 2026-03-07 00:57:02.688410 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-07 00:57:02.688418 | orchestrator | Saturday 07 March 2026 00:56:15 +0000 (0:00:00.354) 0:06:53.351 ******** 2026-03-07 00:57:02.688426 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:57:02.688433 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:57:02.688441 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:57:02.688449 | orchestrator | 2026-03-07 00:57:02.688461 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-07 00:57:02.688476 | orchestrator | Saturday 07 March 2026 00:56:16 +0000 
(0:00:00.963) 0:06:54.315 ******** 2026-03-07 00:57:02.688484 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:57:02.688492 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:57:02.688499 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:57:02.688507 | orchestrator | 2026-03-07 00:57:02.688514 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-07 00:57:02.688522 | orchestrator | Saturday 07 March 2026 00:56:17 +0000 (0:00:01.228) 0:06:55.543 ******** 2026-03-07 00:57:02.688529 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:57:02.688537 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:57:02.688545 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:57:02.688552 | orchestrator | 2026-03-07 00:57:02.688560 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-07 00:57:02.688567 | orchestrator | Saturday 07 March 2026 00:56:18 +0000 (0:00:00.898) 0:06:56.441 ******** 2026-03-07 00:57:02.688575 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.688583 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.688590 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.688597 | orchestrator | 2026-03-07 00:57:02.688609 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-07 00:57:02.688617 | orchestrator | Saturday 07 March 2026 00:56:28 +0000 (0:00:09.759) 0:07:06.201 ******** 2026-03-07 00:57:02.688625 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:57:02.688632 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:57:02.688640 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:57:02.688647 | orchestrator | 2026-03-07 00:57:02.688655 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-07 00:57:02.688662 | orchestrator | Saturday 07 March 2026 00:56:29 +0000 (0:00:00.788) 0:07:06.990 ******** 2026-03-07 
00:57:02.688670 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.688678 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.688685 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.688693 | orchestrator | 2026-03-07 00:57:02.688701 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-07 00:57:02.688709 | orchestrator | Saturday 07 March 2026 00:56:43 +0000 (0:00:14.165) 0:07:21.155 ******** 2026-03-07 00:57:02.688717 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:57:02.688724 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:57:02.688732 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:57:02.688739 | orchestrator | 2026-03-07 00:57:02.688747 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-07 00:57:02.688754 | orchestrator | Saturday 07 March 2026 00:56:44 +0000 (0:00:01.283) 0:07:22.439 ******** 2026-03-07 00:57:02.688762 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:57:02.688815 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:57:02.688823 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:57:02.688831 | orchestrator | 2026-03-07 00:57:02.688839 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-07 00:57:02.688847 | orchestrator | Saturday 07 March 2026 00:56:54 +0000 (0:00:10.017) 0:07:32.456 ******** 2026-03-07 00:57:02.688854 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.688862 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.688869 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.688877 | orchestrator | 2026-03-07 00:57:02.688885 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-07 00:57:02.688892 | orchestrator | Saturday 07 March 2026 00:56:54 +0000 (0:00:00.343) 0:07:32.800 ******** 2026-03-07 00:57:02.688900 | 
orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.688908 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.688915 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.688923 | orchestrator | 2026-03-07 00:57:02.688931 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-07 00:57:02.688939 | orchestrator | Saturday 07 March 2026 00:56:55 +0000 (0:00:00.362) 0:07:33.162 ******** 2026-03-07 00:57:02.688953 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.688960 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.688968 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.688976 | orchestrator | 2026-03-07 00:57:02.688983 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-07 00:57:02.688990 | orchestrator | Saturday 07 March 2026 00:56:55 +0000 (0:00:00.697) 0:07:33.860 ******** 2026-03-07 00:57:02.688997 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.689004 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.689013 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.689021 | orchestrator | 2026-03-07 00:57:02.689028 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-07 00:57:02.689036 | orchestrator | Saturday 07 March 2026 00:56:56 +0000 (0:00:00.333) 0:07:34.193 ******** 2026-03-07 00:57:02.689043 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.689051 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.689059 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.689067 | orchestrator | 2026-03-07 00:57:02.689074 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-07 00:57:02.689082 | orchestrator | Saturday 07 March 2026 00:56:56 +0000 (0:00:00.378) 0:07:34.571 ******** 2026-03-07 00:57:02.689090 | 
orchestrator | skipping: [testbed-node-0] 2026-03-07 00:57:02.689098 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:57:02.689105 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:57:02.689113 | orchestrator | 2026-03-07 00:57:02.689121 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-07 00:57:02.689128 | orchestrator | Saturday 07 March 2026 00:56:56 +0000 (0:00:00.305) 0:07:34.877 ******** 2026-03-07 00:57:02.689136 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:57:02.689144 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:57:02.689152 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:57:02.689159 | orchestrator | 2026-03-07 00:57:02.689167 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-07 00:57:02.689175 | orchestrator | Saturday 07 March 2026 00:56:58 +0000 (0:00:01.199) 0:07:36.076 ******** 2026-03-07 00:57:02.689182 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:57:02.689190 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:57:02.689197 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:57:02.689205 | orchestrator | 2026-03-07 00:57:02.689213 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:57:02.689226 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-07 00:57:02.689235 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-07 00:57:02.689243 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-07 00:57:02.689251 | orchestrator | 2026-03-07 00:57:02.689259 | orchestrator | 2026-03-07 00:57:02.689266 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:57:02.689274 | orchestrator | Saturday 07 March 
2026 00:56:59 +0000 (0:00:00.824) 0:07:36.900 ******** 2026-03-07 00:57:02.689281 | orchestrator | =============================================================================== 2026-03-07 00:57:02.689294 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 14.17s 2026-03-07 00:57:02.689302 | orchestrator | loadbalancer : Start backup keepalived container ----------------------- 10.02s 2026-03-07 00:57:02.689309 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.76s 2026-03-07 00:57:02.689317 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 7.81s 2026-03-07 00:57:02.689325 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.13s 2026-03-07 00:57:02.689338 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.99s 2026-03-07 00:57:02.689345 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.10s 2026-03-07 00:57:02.689352 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.84s 2026-03-07 00:57:02.689359 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.74s 2026-03-07 00:57:02.689365 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 5.42s 2026-03-07 00:57:02.689372 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 5.41s 2026-03-07 00:57:02.689379 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 5.37s 2026-03-07 00:57:02.689386 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.31s 2026-03-07 00:57:02.689393 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 5.30s 2026-03-07 00:57:02.689400 | orchestrator | haproxy-config : Copying over aodh 
haproxy config ----------------------- 5.30s 2026-03-07 00:57:02.689407 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.27s 2026-03-07 00:57:02.689414 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.96s 2026-03-07 00:57:02.689421 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.76s 2026-03-07 00:57:02.689428 | orchestrator | loadbalancer : Remove mariadb.cfg if proxysql enabled ------------------- 4.72s 2026-03-07 00:57:02.689435 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.62s 2026-03-07 00:57:02.689442 | orchestrator | 2026-03-07 00:57:02 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 00:57:02.689449 | orchestrator | 2026-03-07 00:57:02 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:57:02.689456 | orchestrator | 2026-03-07 00:57:02 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:57:05.719410 | orchestrator | 2026-03-07 00:57:05 | INFO  | Task a8b4a5a2-b965-43a3-bd0e-1f7d24c29831 is in state STARTED 2026-03-07 00:57:05.721038 | orchestrator | 2026-03-07 00:57:05 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 00:57:05.724425 | orchestrator | 2026-03-07 00:57:05 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:57:05.724482 | orchestrator | 2026-03-07 00:57:05 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:57:08.774545 | orchestrator | 2026-03-07 00:57:08 | INFO  | Task a8b4a5a2-b965-43a3-bd0e-1f7d24c29831 is in state STARTED 2026-03-07 00:57:08.774705 | orchestrator | 2026-03-07 00:57:08 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 00:57:08.775535 | orchestrator | 2026-03-07 00:57:08 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 
INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:57:20.923749 | orchestrator | 2026-03-07 00:57:20 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:58:55.633201 | orchestrator | 2026-03-07 00:58:55 | INFO  | Task a8b4a5a2-b965-43a3-bd0e-1f7d24c29831 is in state STARTED 2026-03-07 00:58:55.635342 | orchestrator
| 2026-03-07 00:58:55 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 00:58:55.636902 | orchestrator | 2026-03-07 00:58:55 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:58:55.637159 | orchestrator | 2026-03-07 00:58:55 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:58:58.689945 | orchestrator | 2026-03-07 00:58:58 | INFO  | Task a8b4a5a2-b965-43a3-bd0e-1f7d24c29831 is in state STARTED 2026-03-07 00:58:58.692959 | orchestrator | 2026-03-07 00:58:58 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 00:58:58.695406 | orchestrator | 2026-03-07 00:58:58 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state STARTED 2026-03-07 00:58:58.695668 | orchestrator | 2026-03-07 00:58:58 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:59:01.751415 | orchestrator | 2026-03-07 00:59:01 | INFO  | Task f3ef2882-100c-4207-b598-42bcc552901a is in state STARTED 2026-03-07 00:59:01.751548 | orchestrator | 2026-03-07 00:59:01 | INFO  | Task a8b4a5a2-b965-43a3-bd0e-1f7d24c29831 is in state STARTED 2026-03-07 00:59:01.752812 | orchestrator | 2026-03-07 00:59:01 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 00:59:01.759095 | orchestrator | 2026-03-07 00:59:01 | INFO  | Task 32e22789-ecc8-4400-8b9b-908f4ffc1f33 is in state SUCCESS 2026-03-07 00:59:01.760094 | orchestrator | 2026-03-07 00:59:01.760136 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-07 00:59:01.760150 | orchestrator | 2.16.14 2026-03-07 00:59:01.760316 | orchestrator | 2026-03-07 00:59:01.760633 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-07 00:59:01.760653 | orchestrator | 2026-03-07 00:59:01.760667 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-07 00:59:01.760681 | orchestrator 
| Saturday 07 March 2026 00:46:45 +0000 (0:00:00.787) 0:00:00.787 ******** 2026-03-07 00:59:01.760695 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:59:01.760907 | orchestrator | 2026-03-07 00:59:01.760923 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-07 00:59:01.760937 | orchestrator | Saturday 07 March 2026 00:46:47 +0000 (0:00:01.309) 0:00:02.096 ******** 2026-03-07 00:59:01.760950 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.761035 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.761052 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.761065 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.761096 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.761111 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.761262 | orchestrator | 2026-03-07 00:59:01.761282 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-07 00:59:01.761295 | orchestrator | Saturday 07 March 2026 00:46:48 +0000 (0:00:01.748) 0:00:03.845 ******** 2026-03-07 00:59:01.761308 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.761321 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.761333 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.761754 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.761776 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.761790 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.761803 | orchestrator | 2026-03-07 00:59:01.761817 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-07 00:59:01.761830 | orchestrator | Saturday 07 March 2026 00:46:49 +0000 (0:00:00.938) 0:00:04.783 ******** 2026-03-07 00:59:01.761843 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.761878 
| orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.761892 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.761905 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.761917 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.761929 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.761943 | orchestrator | 2026-03-07 00:59:01.761957 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-07 00:59:01.761971 | orchestrator | Saturday 07 March 2026 00:46:50 +0000 (0:00:00.928) 0:00:05.711 ******** 2026-03-07 00:59:01.761984 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.761998 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.762011 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.762179 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.762195 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.762208 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.762221 | orchestrator | 2026-03-07 00:59:01.762235 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-07 00:59:01.762249 | orchestrator | Saturday 07 March 2026 00:46:51 +0000 (0:00:00.761) 0:00:06.473 ******** 2026-03-07 00:59:01.762304 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.762319 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.762328 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.762376 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.762617 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.762626 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.762634 | orchestrator | 2026-03-07 00:59:01.762642 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-07 00:59:01.762650 | orchestrator | Saturday 07 March 2026 00:46:51 +0000 (0:00:00.597) 0:00:07.070 ******** 2026-03-07 00:59:01.762658 | orchestrator | ok: 
[testbed-node-3] 2026-03-07 00:59:01.762666 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.762673 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.762698 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.762756 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.762766 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.762774 | orchestrator | 2026-03-07 00:59:01.762782 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-07 00:59:01.762790 | orchestrator | Saturday 07 March 2026 00:46:52 +0000 (0:00:00.953) 0:00:08.024 ******** 2026-03-07 00:59:01.762798 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.762807 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.762815 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.763111 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.763130 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.763138 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.763146 | orchestrator | 2026-03-07 00:59:01.763154 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-07 00:59:01.763163 | orchestrator | Saturday 07 March 2026 00:46:53 +0000 (0:00:00.718) 0:00:08.743 ******** 2026-03-07 00:59:01.763171 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.763178 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.763186 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.763194 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.763202 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.763209 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.763217 | orchestrator | 2026-03-07 00:59:01.763225 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-07 00:59:01.763233 | orchestrator | Saturday 07 March 2026 00:46:55 +0000 (0:00:01.646) 
0:00:10.389 ******** 2026-03-07 00:59:01.763241 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-07 00:59:01.763249 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-07 00:59:01.763257 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-07 00:59:01.763265 | orchestrator | 2026-03-07 00:59:01.763272 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-07 00:59:01.763280 | orchestrator | Saturday 07 March 2026 00:46:56 +0000 (0:00:00.960) 0:00:11.350 ******** 2026-03-07 00:59:01.763288 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.763296 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.763304 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.763386 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.763395 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.763402 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.763409 | orchestrator | 2026-03-07 00:59:01.763420 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-07 00:59:01.763434 | orchestrator | Saturday 07 March 2026 00:46:57 +0000 (0:00:01.046) 0:00:12.396 ******** 2026-03-07 00:59:01.763500 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-07 00:59:01.763514 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-07 00:59:01.763523 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-07 00:59:01.763534 | orchestrator | 2026-03-07 00:59:01.763545 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-07 00:59:01.763556 | orchestrator | Saturday 07 March 2026 00:46:59 +0000 (0:00:02.667) 0:00:15.064 
******** 2026-03-07 00:59:01.763567 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-07 00:59:01.763670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-07 00:59:01.763694 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-07 00:59:01.763706 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.763823 | orchestrator | 2026-03-07 00:59:01.763839 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-07 00:59:01.763871 | orchestrator | Saturday 07 March 2026 00:47:01 +0000 (0:00:01.339) 0:00:16.403 ******** 2026-03-07 00:59:01.764462 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.764484 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.764492 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.764499 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.764506 | orchestrator | 2026-03-07 00:59:01.764513 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-07 00:59:01.764520 | orchestrator | Saturday 07 March 2026 00:47:02 +0000 (0:00:01.239) 0:00:17.642 ******** 2026-03-07 00:59:01.764528 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.764538 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.764545 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.764552 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.764559 | orchestrator | 2026-03-07 00:59:01.764566 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-07 00:59:01.764572 | orchestrator | Saturday 07 March 2026 00:47:03 +0000 (0:00:00.688) 0:00:18.331 ******** 2026-03-07 00:59:01.764895 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-07 00:46:58.041938', 'end': '2026-03-07 00:46:58.137054', 'delta': '0:00:00.095116', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', 
'_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.765012 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-07 00:46:58.853693', 'end': '2026-03-07 00:46:58.953710', 'delta': '0:00:00.100017', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.765029 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-07 00:46:59.663713', 'end': '2026-03-07 00:46:59.767213', 'delta': '0:00:00.103500', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.765037 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.765044 | 
orchestrator | 2026-03-07 00:59:01.765051 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-07 00:59:01.765057 | orchestrator | Saturday 07 March 2026 00:47:03 +0000 (0:00:00.421) 0:00:18.752 ******** 2026-03-07 00:59:01.765064 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.765072 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.765078 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.765085 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.765092 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.765099 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.765105 | orchestrator | 2026-03-07 00:59:01.765112 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-07 00:59:01.765118 | orchestrator | Saturday 07 March 2026 00:47:05 +0000 (0:00:02.229) 0:00:20.981 ******** 2026-03-07 00:59:01.765125 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-07 00:59:01.765132 | orchestrator | 2026-03-07 00:59:01.765139 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-07 00:59:01.765145 | orchestrator | Saturday 07 March 2026 00:47:06 +0000 (0:00:00.674) 0:00:21.656 ******** 2026-03-07 00:59:01.765152 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.765159 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.765165 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.765172 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.765179 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.765185 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.765192 | orchestrator | 2026-03-07 00:59:01.765198 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-07 00:59:01.765268 | orchestrator | Saturday 07 March 2026 00:47:07 +0000 
(0:00:01.407) 0:00:23.063 ******** 2026-03-07 00:59:01.765275 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.765282 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.765289 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.765296 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.766125 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.766240 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.766255 | orchestrator | 2026-03-07 00:59:01.766267 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-07 00:59:01.766280 | orchestrator | Saturday 07 March 2026 00:47:10 +0000 (0:00:02.473) 0:00:25.537 ******** 2026-03-07 00:59:01.766291 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.766303 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.766315 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.766326 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.766338 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.766350 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.766362 | orchestrator | 2026-03-07 00:59:01.766374 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-07 00:59:01.766589 | orchestrator | Saturday 07 March 2026 00:47:12 +0000 (0:00:01.606) 0:00:27.144 ******** 2026-03-07 00:59:01.766601 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.766613 | orchestrator | 2026-03-07 00:59:01.766624 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-07 00:59:01.766636 | orchestrator | Saturday 07 March 2026 00:47:12 +0000 (0:00:00.201) 0:00:27.345 ******** 2026-03-07 00:59:01.766646 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.766657 | orchestrator | 2026-03-07 00:59:01.766668 | orchestrator | TASK [ceph-facts : 
Set_fact fsid] ********************************************** 2026-03-07 00:59:01.766679 | orchestrator | Saturday 07 March 2026 00:47:12 +0000 (0:00:00.475) 0:00:27.821 ******** 2026-03-07 00:59:01.766690 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.766698 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.766705 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.766790 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.766800 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.766807 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.766814 | orchestrator | 2026-03-07 00:59:01.766821 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-07 00:59:01.766827 | orchestrator | Saturday 07 March 2026 00:47:14 +0000 (0:00:01.428) 0:00:29.250 ******** 2026-03-07 00:59:01.766834 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.766841 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.766847 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.766881 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.766890 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.766897 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.766904 | orchestrator | 2026-03-07 00:59:01.766910 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-07 00:59:01.766917 | orchestrator | Saturday 07 March 2026 00:47:15 +0000 (0:00:01.676) 0:00:30.926 ******** 2026-03-07 00:59:01.766924 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.766931 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.766937 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.766944 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.766958 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.766965 | orchestrator 
| skipping: [testbed-node-2] 2026-03-07 00:59:01.766971 | orchestrator | 2026-03-07 00:59:01.766978 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-07 00:59:01.766985 | orchestrator | Saturday 07 March 2026 00:47:16 +0000 (0:00:01.110) 0:00:32.037 ******** 2026-03-07 00:59:01.766991 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.766998 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.767004 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.767011 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.767018 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.767024 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.767031 | orchestrator | 2026-03-07 00:59:01.767038 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-07 00:59:01.767045 | orchestrator | Saturday 07 March 2026 00:47:18 +0000 (0:00:01.677) 0:00:33.714 ******** 2026-03-07 00:59:01.767051 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.767058 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.767064 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.767071 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.767077 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.767084 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.767091 | orchestrator | 2026-03-07 00:59:01.767097 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-07 00:59:01.767104 | orchestrator | Saturday 07 March 2026 00:47:19 +0000 (0:00:00.700) 0:00:34.414 ******** 2026-03-07 00:59:01.767118 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.767125 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.767132 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.767138 | orchestrator | 
skipping: [testbed-node-0] 2026-03-07 00:59:01.767145 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.767151 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.767158 | orchestrator | 2026-03-07 00:59:01.767165 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-07 00:59:01.767171 | orchestrator | Saturday 07 March 2026 00:47:20 +0000 (0:00:00.976) 0:00:35.391 ******** 2026-03-07 00:59:01.767178 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.767185 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.767196 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.767207 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.767218 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.767229 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.767240 | orchestrator | 2026-03-07 00:59:01.767252 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-07 00:59:01.767263 | orchestrator | Saturday 07 March 2026 00:47:21 +0000 (0:00:00.762) 0:00:36.153 ******** 2026-03-07 00:59:01.767275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3529c73b--8337--5a09--bb85--f9958b3a6115-osd--block--3529c73b--8337--5a09--bb85--f9958b3a6115', 'dm-uuid-LVM-G0E8Zuq5yuVlrHw9a1He7gOIdUDQ5vRvDav2cdc2yUdKDp0kFFHzFFNxbbbIx2cl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.767289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--5644fa9a--696a--5a4b--ae2f--cbc58e712aba-osd--block--5644fa9a--696a--5a4b--ae2f--cbc58e712aba', 'dm-uuid-LVM-dhDW2UCAexsGjSiFebxoizRulGlPS4gKsFjbX1boFDhq9isN1VoVpNR4Bh2837W9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.767371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.767390 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.767408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.767421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.767444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.767455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.767466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.767478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.767574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e', 'scsi-SQEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part1', 'scsi-SQEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part14', 'scsi-SQEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part15', 'scsi-SQEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part16', 'scsi-SQEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:59:01.767600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--030f8481--3d62--5800--8c17--c22bf68268ab-osd--block--030f8481--3d62--5800--8c17--c22bf68268ab', 'dm-uuid-LVM-ytYYAfTHI2JJN8pIptvTymOsYYxl2nsKzc808rdq6y5Gdjzh4bduZ7BnCE3GrxqB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.767613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3529c73b--8337--5a09--bb85--f9958b3a6115-osd--block--3529c73b--8337--5a09--bb85--f9958b3a6115'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-S02Erf-lc84-aEKo-iaps-RrwA-neru-0Ilncq', 'scsi-0QEMU_QEMU_HARDDISK_d799a894-5671-421e-939f-d4a49d05b62b', 'scsi-SQEMU_QEMU_HARDDISK_d799a894-5671-421e-939f-d4a49d05b62b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:59:01.767625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8595c920--fb8d--5336--8a83--206e7467f719-osd--block--8595c920--fb8d--5336--8a83--206e7467f719', 'dm-uuid-LVM-MKoDmalCC26sY8T7Ia0Pupmb1laUrtAAMsccR57JvJ9Pcl0lp5RHWRcj8Ify5bAW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.767638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5644fa9a--696a--5a4b--ae2f--cbc58e712aba-osd--block--5644fa9a--696a--5a4b--ae2f--cbc58e712aba'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-e6byjw-4raU-qrnL-AWeA-GErv-hIhn-F6rGTE', 'scsi-0QEMU_QEMU_HARDDISK_beed34ce-a5a1-4e0c-b446-348e6964ce68', 'scsi-SQEMU_QEMU_HARDDISK_beed34ce-a5a1-4e0c-b446-348e6964ce68'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:59:01.767712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.767757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99318194-5870-4346-ba42-ca8c5b557f89', 'scsi-SQEMU_QEMU_HARDDISK_99318194-5870-4346-ba42-ca8c5b557f89'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:59:01.767780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:59:01.767792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.767804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-07 00:59:01.767816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.767828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6dc70d00--a24c--54e3--88f7--ca23e2f9592d-osd--block--6dc70d00--a24c--54e3--88f7--ca23e2f9592d', 'dm-uuid-LVM-jkgqALCR248QwEh8evGRjlqVGySWdBdbNaJY1aOUgbEjt6zlhDkXD7FZlYZulSsu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.767841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.767943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-07 00:59:01.767959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3960461f--aa79--5447--98f8--9395cd95d2e3-osd--block--3960461f--aa79--5447--98f8--9395cd95d2e3', 'dm-uuid-LVM-Iqc4EXlTAo7kndsl0bo8MAuKJ1GjlGC0u2vyA0SVFnmeD66qOH2yKLG7OUWO7NHS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.767984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.767997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768021 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.768034 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768046 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768161 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768207 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250', 'scsi-SQEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part1', 'scsi-SQEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part14', 'scsi-SQEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part15', 'scsi-SQEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part16', 'scsi-SQEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:59:01.768221 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--030f8481--3d62--5800--8c17--c22bf68268ab-osd--block--030f8481--3d62--5800--8c17--c22bf68268ab'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tJMgzR-FH3c-VJN8-t3LR-mjCg-cB1e-k3f88q', 'scsi-0QEMU_QEMU_HARDDISK_af4ba259-cb6f-4fcf-8c2a-944dae969065', 'scsi-SQEMU_QEMU_HARDDISK_af4ba259-cb6f-4fcf-8c2a-944dae969065'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:59:01.768351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8595c920--fb8d--5336--8a83--206e7467f719-osd--block--8595c920--fb8d--5336--8a83--206e7467f719'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vkkAYW-kdqr-YzMM-mBoy-jz1M-mFAH-9eEkCi', 'scsi-0QEMU_QEMU_HARDDISK_3c3014e6-40a5-4340-97e1-b63d744f1dc5', 'scsi-SQEMU_QEMU_HARDDISK_3c3014e6-40a5-4340-97e1-b63d744f1dc5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:59:01.768453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d', 'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part1', 'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part14', 'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part15', 'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part16', 'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:59:01.768477 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6dc70d00--a24c--54e3--88f7--ca23e2f9592d-osd--block--6dc70d00--a24c--54e3--88f7--ca23e2f9592d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZaZ5JV-te9Q-ux0A-aq6c-OwVe-IKBo-dM6h9H', 'scsi-0QEMU_QEMU_HARDDISK_fc232e22-cf7b-4f47-aee0-37a45820ed30', 'scsi-SQEMU_QEMU_HARDDISK_fc232e22-cf7b-4f47-aee0-37a45820ed30'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:59:01.768499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fdacbb0-1c31-482c-97c6-063a331da0fc', 'scsi-SQEMU_QEMU_HARDDISK_4fdacbb0-1c31-482c-97c6-063a331da0fc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:59:01.768596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--3960461f--aa79--5447--98f8--9395cd95d2e3-osd--block--3960461f--aa79--5447--98f8--9395cd95d2e3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TpUQJo-P6aT-RbXI-AWtd-Rfbr-me5S-2vqAGd', 'scsi-0QEMU_QEMU_HARDDISK_81cf8acf-ab0c-4c96-8ca2-b696b28e7835', 'scsi-SQEMU_QEMU_HARDDISK_81cf8acf-ab0c-4c96-8ca2-b696b28e7835'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:59:01.768623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa6ba2e8-2fa1-4496-a2d0-ef7dd6ca5952', 'scsi-SQEMU_QEMU_HARDDISK_fa6ba2e8-2fa1-4496-a2d0-ef7dd6ca5952'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:59:01.768635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:59:01.768659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:59:01.768670 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.768680 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.768692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 00:59:01.768788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa', 'scsi-SQEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa-part1', 'scsi-SQEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa-part14', 'scsi-SQEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa-part15', 'scsi-SQEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa-part16', 'scsi-SQEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 00:59:01.768809 | 
orchestrator | skipping: [testbed-node-2] => (items loop0-loop7: empty 0-byte loop devices; sda: QEMU HARDDISK, 80.00 GB, partitions sda1 "cloudimg-rootfs" 79.00 GB, sda14 4.00 MB, sda15 "UEFI" 106.00 MB, sda16 "BOOT" 913.00 MB; sr0: QEMU DVD-ROM, label "config-2", 506.00 KB)
2026-03-07 00:59:01.768832 | orchestrator | skipping: [testbed-node-0] => (sr0: QEMU DVD-ROM, label "config-2", 506.00 KB)
2026-03-07 00:59:01.768821 | orchestrator | skipping: [testbed-node-1] => (items loop4-loop7: empty 0-byte loop devices; sda: QEMU HARDDISK, 80.00 GB, partitions sda1 "cloudimg-rootfs" 79.00 GB, sda14 4.00 MB, sda15 "UEFI" 106.00 MB, sda16 "BOOT" 913.00 MB; sr0: QEMU DVD-ROM, label "config-2", 506.00 KB)
2026-03-07 00:59:01.768891 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.769182 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.769263 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.769270 | orchestrator |
2026-03-07 00:59:01.769276 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-07 00:59:01.769283 | orchestrator | Saturday 07 March 2026 00:47:22 +0000 (0:00:01.531)       0:00:37.685 ********
2026-03-07 00:59:01.769294 | orchestrator | skipping: [testbed-node-3] => (skip_reason: "Conditional result was False", false_condition: "osd_auto_discovery | default(False) | bool"; items dm-0, dm-1: ceph OSD block LVs, 20.00 GB each; loop0-loop7: empty 0-byte loop devices; sda: QEMU HARDDISK, 80.00 GB, partitions sda1 "cloudimg-rootfs" 79.00 GB, sda14 4.00 MB, sda15 "UEFI" 106.00 MB, sda16 "BOOT" 913.00 MB; sdb: QEMU HARDDISK, 20.00 GB, LVM PV backing ceph OSD block LV)
2026-03-07 00:59:01.769315 | orchestrator | skipping: [testbed-node-4] => (same false_condition; items dm-0, dm-1: ceph OSD block LVs, 20.00 GB each; loop0-loop7: empty 0-byte loop devices; sda: QEMU HARDDISK, 80.00 GB, same partition layout; sdb, sdc: QEMU HARDDISK, 20.00 GB each, LVM PVs backing ceph OSD block LVs; sdd: QEMU HARDDISK, SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fdacbb0-1c31-482c-97c6-063a331da0fc', 'scsi-SQEMU_QEMU_HARDDISK_4fdacbb0-1c31-482c-97c6-063a331da0fc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770211 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770291 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5644fa9a--696a--5a4b--ae2f--cbc58e712aba-osd--block--5644fa9a--696a--5a4b--ae2f--cbc58e712aba'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-e6byjw-4raU-qrnL-AWeA-GErv-hIhn-F6rGTE', 'scsi-0QEMU_QEMU_HARDDISK_beed34ce-a5a1-4e0c-b446-348e6964ce68', 'scsi-SQEMU_QEMU_HARDDISK_beed34ce-a5a1-4e0c-b446-348e6964ce68'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770314 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770325 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-03-07 00:59:01.770336 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770356 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770367 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99318194-5870-4346-ba42-ca8c5b557f89', 'scsi-SQEMU_QEMU_HARDDISK_99318194-5870-4346-ba42-ca8c5b557f89'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770443 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d', 'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part1', 'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part14', 'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part15', 
'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part16', 'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770466 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770477 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770489 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770636 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770657 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': 
['ceph--6dc70d00--a24c--54e3--88f7--ca23e2f9592d-osd--block--6dc70d00--a24c--54e3--88f7--ca23e2f9592d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZaZ5JV-te9Q-ux0A-aq6c-OwVe-IKBo-dM6h9H', 'scsi-0QEMU_QEMU_HARDDISK_fc232e22-cf7b-4f47-aee0-37a45820ed30', 'scsi-SQEMU_QEMU_HARDDISK_fc232e22-cf7b-4f47-aee0-37a45820ed30'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770664 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770680 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770687 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3960461f--aa79--5447--98f8--9395cd95d2e3-osd--block--3960461f--aa79--5447--98f8--9395cd95d2e3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TpUQJo-P6aT-RbXI-AWtd-Rfbr-me5S-2vqAGd', 'scsi-0QEMU_QEMU_HARDDISK_81cf8acf-ab0c-4c96-8ca2-b696b28e7835', 'scsi-SQEMU_QEMU_HARDDISK_81cf8acf-ab0c-4c96-8ca2-b696b28e7835'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770738 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa6ba2e8-2fa1-4496-a2d0-ef7dd6ca5952', 'scsi-SQEMU_QEMU_HARDDISK_fa6ba2e8-2fa1-4496-a2d0-ef7dd6ca5952'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770753 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa', 'scsi-SQEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa-part1', 'scsi-SQEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa-part14', 'scsi-SQEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa-part15', 
'scsi-SQEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa-part16', 'scsi-SQEMU_QEMU_HARDDISK_71c8fc84-aa22-48e4-a4b3-817a97778daa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770766 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.770814 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770941 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 
'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770956 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770968 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770981 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770988 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.770994 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.771023 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.771103 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.771121 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.771131 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.771142 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d38c21a8-dda9-4fc2-b1ab-cbf9e01f4351', 'scsi-SQEMU_QEMU_HARDDISK_d38c21a8-dda9-4fc2-b1ab-cbf9e01f4351'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d38c21a8-dda9-4fc2-b1ab-cbf9e01f4351-part1', 'scsi-SQEMU_QEMU_HARDDISK_d38c21a8-dda9-4fc2-b1ab-cbf9e01f4351-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d38c21a8-dda9-4fc2-b1ab-cbf9e01f4351-part14', 'scsi-SQEMU_QEMU_HARDDISK_d38c21a8-dda9-4fc2-b1ab-cbf9e01f4351-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d38c21a8-dda9-4fc2-b1ab-cbf9e01f4351-part15', 'scsi-SQEMU_QEMU_HARDDISK_d38c21a8-dda9-4fc2-b1ab-cbf9e01f4351-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d38c21a8-dda9-4fc2-b1ab-cbf9e01f4351-part16', 'scsi-SQEMU_QEMU_HARDDISK_d38c21a8-dda9-4fc2-b1ab-cbf9e01f4351-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.771162 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.771173 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.771241 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.771254 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.771265 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.771298 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.771316 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.771328 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.771339 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.771350 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.771418 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.771436 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.771455 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_797fca32-d50c-40a1-babd-cf40b6b01cdf', 'scsi-SQEMU_QEMU_HARDDISK_797fca32-d50c-40a1-babd-cf40b6b01cdf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_797fca32-d50c-40a1-babd-cf40b6b01cdf-part1', 'scsi-SQEMU_QEMU_HARDDISK_797fca32-d50c-40a1-babd-cf40b6b01cdf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_797fca32-d50c-40a1-babd-cf40b6b01cdf-part14', 'scsi-SQEMU_QEMU_HARDDISK_797fca32-d50c-40a1-babd-cf40b6b01cdf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_797fca32-d50c-40a1-babd-cf40b6b01cdf-part15', 'scsi-SQEMU_QEMU_HARDDISK_797fca32-d50c-40a1-babd-cf40b6b01cdf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': 
['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_797fca32-d50c-40a1-babd-cf40b6b01cdf-part16', 'scsi-SQEMU_QEMU_HARDDISK_797fca32-d50c-40a1-babd-cf40b6b01cdf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.771467 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 00:59:01.771479 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.771490 | orchestrator | 2026-03-07 00:59:01.771572 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-07 00:59:01.771586 | orchestrator | Saturday 07 March 2026 00:47:24 +0000 (0:00:01.678) 0:00:39.363 ******** 2026-03-07 00:59:01.771597 | orchestrator | ok: 
[testbed-node-3] 2026-03-07 00:59:01.771609 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.771619 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.771630 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.771641 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.771651 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.771661 | orchestrator | 2026-03-07 00:59:01.771672 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-07 00:59:01.771691 | orchestrator | Saturday 07 March 2026 00:47:25 +0000 (0:00:01.314) 0:00:40.678 ******** 2026-03-07 00:59:01.771702 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.771712 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.771722 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.771733 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.771743 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.771755 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.771765 | orchestrator | 2026-03-07 00:59:01.771776 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-07 00:59:01.771788 | orchestrator | Saturday 07 March 2026 00:47:26 +0000 (0:00:01.070) 0:00:41.749 ******** 2026-03-07 00:59:01.771804 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.771815 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.771826 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.771837 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.771866 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.771877 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.771887 | orchestrator | 2026-03-07 00:59:01.771897 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-07 00:59:01.771920 | orchestrator | Saturday 07 March 2026 00:47:30 +0000 (0:00:03.403) 
0:00:45.152 ******** 2026-03-07 00:59:01.771931 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.771940 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.771951 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.771961 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.771971 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.771980 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.771987 | orchestrator | 2026-03-07 00:59:01.771993 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-07 00:59:01.771999 | orchestrator | Saturday 07 March 2026 00:47:30 +0000 (0:00:00.914) 0:00:46.066 ******** 2026-03-07 00:59:01.772005 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.772011 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.772017 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.772023 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.772029 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.772035 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.772041 | orchestrator | 2026-03-07 00:59:01.772047 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-07 00:59:01.772053 | orchestrator | Saturday 07 March 2026 00:47:32 +0000 (0:00:01.476) 0:00:47.543 ******** 2026-03-07 00:59:01.772060 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.772070 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.772079 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.772089 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.772099 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.772109 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.772120 | orchestrator | 2026-03-07 00:59:01.772130 | orchestrator | TASK [ceph-facts : Set_fact 
_monitor_addresses - ipv4] ************************* 2026-03-07 00:59:01.772141 | orchestrator | Saturday 07 March 2026 00:47:33 +0000 (0:00:01.319) 0:00:48.863 ******** 2026-03-07 00:59:01.772152 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-07 00:59:01.772162 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-07 00:59:01.772169 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-07 00:59:01.772175 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-07 00:59:01.772181 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-07 00:59:01.772187 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-07 00:59:01.772193 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-07 00:59:01.772200 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-07 00:59:01.772214 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-07 00:59:01.772222 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-07 00:59:01.772229 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-07 00:59:01.772236 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-07 00:59:01.772244 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-07 00:59:01.772251 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-07 00:59:01.772259 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-07 00:59:01.772266 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-07 00:59:01.772273 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-07 00:59:01.772281 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-07 00:59:01.772288 | orchestrator | 2026-03-07 00:59:01.772295 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-07 00:59:01.772303 | 
orchestrator | Saturday 07 March 2026 00:47:38 +0000 (0:00:04.257) 0:00:53.120 ******** 2026-03-07 00:59:01.772311 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-07 00:59:01.772318 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-07 00:59:01.772324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-07 00:59:01.772330 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-07 00:59:01.772336 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-07 00:59:01.772343 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-07 00:59:01.772354 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.772364 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-07 00:59:01.772449 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-07 00:59:01.772463 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-07 00:59:01.772473 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.772484 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-07 00:59:01.772494 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-07 00:59:01.772505 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-07 00:59:01.772516 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.772526 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-07 00:59:01.772536 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-07 00:59:01.772547 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-07 00:59:01.772557 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.772568 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.772578 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  
2026-03-07 00:59:01.772589 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-07 00:59:01.772599 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-07 00:59:01.772616 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.772627 | orchestrator | 2026-03-07 00:59:01.772638 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-07 00:59:01.772648 | orchestrator | Saturday 07 March 2026 00:47:38 +0000 (0:00:00.827) 0:00:53.948 ******** 2026-03-07 00:59:01.772658 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.772669 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.772679 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.772690 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:59:01.772701 | orchestrator | 2026-03-07 00:59:01.772712 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-07 00:59:01.772723 | orchestrator | Saturday 07 March 2026 00:47:40 +0000 (0:00:01.282) 0:00:55.231 ******** 2026-03-07 00:59:01.772746 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.772757 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.772768 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.772779 | orchestrator | 2026-03-07 00:59:01.772790 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-07 00:59:01.772801 | orchestrator | Saturday 07 March 2026 00:47:40 +0000 (0:00:00.472) 0:00:55.703 ******** 2026-03-07 00:59:01.772812 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.772823 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.772834 | orchestrator | skipping: [testbed-node-5] 2026-03-07 
00:59:01.772845 | orchestrator | 2026-03-07 00:59:01.772911 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-07 00:59:01.772923 | orchestrator | Saturday 07 March 2026 00:47:41 +0000 (0:00:00.451) 0:00:56.154 ******** 2026-03-07 00:59:01.772934 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.772940 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.772946 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.772952 | orchestrator | 2026-03-07 00:59:01.772958 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-07 00:59:01.772964 | orchestrator | Saturday 07 March 2026 00:47:41 +0000 (0:00:00.612) 0:00:56.767 ******** 2026-03-07 00:59:01.772971 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.772977 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.772983 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.772989 | orchestrator | 2026-03-07 00:59:01.772995 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-07 00:59:01.773001 | orchestrator | Saturday 07 March 2026 00:47:42 +0000 (0:00:01.239) 0:00:58.007 ******** 2026-03-07 00:59:01.773007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:59:01.773013 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 00:59:01.773019 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:59:01.773025 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.773035 | orchestrator | 2026-03-07 00:59:01.773045 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-07 00:59:01.773056 | orchestrator | Saturday 07 March 2026 00:47:43 +0000 (0:00:00.748) 0:00:58.755 ******** 2026-03-07 00:59:01.773067 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  
2026-03-07 00:59:01.773078 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 00:59:01.773089 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:59:01.773101 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.773109 | orchestrator | 2026-03-07 00:59:01.773117 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-07 00:59:01.773125 | orchestrator | Saturday 07 March 2026 00:47:44 +0000 (0:00:00.457) 0:00:59.212 ******** 2026-03-07 00:59:01.773132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:59:01.773140 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 00:59:01.773147 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:59:01.773155 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.773162 | orchestrator | 2026-03-07 00:59:01.773170 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-07 00:59:01.773177 | orchestrator | Saturday 07 March 2026 00:47:44 +0000 (0:00:00.641) 0:00:59.854 ******** 2026-03-07 00:59:01.773184 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.773191 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.773198 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.773206 | orchestrator | 2026-03-07 00:59:01.773213 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-07 00:59:01.773221 | orchestrator | Saturday 07 March 2026 00:47:45 +0000 (0:00:00.782) 0:01:00.637 ******** 2026-03-07 00:59:01.773229 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-07 00:59:01.773244 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-07 00:59:01.773279 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-07 00:59:01.773288 | orchestrator | 2026-03-07 00:59:01.773295 | orchestrator | 
TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-07 00:59:01.773302 | orchestrator | Saturday 07 March 2026 00:47:47 +0000 (0:00:01.900) 0:01:02.538 ******** 2026-03-07 00:59:01.773309 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-07 00:59:01.773317 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-07 00:59:01.773324 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-07 00:59:01.773330 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-07 00:59:01.773338 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-07 00:59:01.773345 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-07 00:59:01.773353 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-07 00:59:01.773360 | orchestrator | 2026-03-07 00:59:01.773374 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-07 00:59:01.773381 | orchestrator | Saturday 07 March 2026 00:47:48 +0000 (0:00:01.308) 0:01:03.846 ******** 2026-03-07 00:59:01.773389 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-07 00:59:01.773396 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-07 00:59:01.773403 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-07 00:59:01.773411 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-07 00:59:01.773418 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-07 00:59:01.773424 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => 
(item=testbed-node-5) 2026-03-07 00:59:01.773434 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-07 00:59:01.773445 | orchestrator | 2026-03-07 00:59:01.773455 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-07 00:59:01.773465 | orchestrator | Saturday 07 March 2026 00:47:51 +0000 (0:00:02.236) 0:01:06.083 ******** 2026-03-07 00:59:01.773477 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:59:01.773489 | orchestrator | 2026-03-07 00:59:01.773500 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-07 00:59:01.773511 | orchestrator | Saturday 07 March 2026 00:47:52 +0000 (0:00:01.399) 0:01:07.483 ******** 2026-03-07 00:59:01.773522 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:59:01.773531 | orchestrator | 2026-03-07 00:59:01.773537 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-07 00:59:01.773543 | orchestrator | Saturday 07 March 2026 00:47:53 +0000 (0:00:01.557) 0:01:09.040 ******** 2026-03-07 00:59:01.773550 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.773556 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.773562 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.773568 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.773574 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.773580 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.773587 | orchestrator | 2026-03-07 00:59:01.773593 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-03-07 00:59:01.773599 | orchestrator | Saturday 07 March 2026 00:47:55 +0000 (0:00:01.575) 0:01:10.616 ******** 2026-03-07 00:59:01.773605 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.773617 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.773624 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.773630 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.773636 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.773642 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.773648 | orchestrator | 2026-03-07 00:59:01.773655 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-07 00:59:01.773661 | orchestrator | Saturday 07 March 2026 00:47:56 +0000 (0:00:01.339) 0:01:11.956 ******** 2026-03-07 00:59:01.773667 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.773673 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.773679 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.773686 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.773692 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.773698 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.773704 | orchestrator | 2026-03-07 00:59:01.773710 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-07 00:59:01.773717 | orchestrator | Saturday 07 March 2026 00:47:58 +0000 (0:00:01.941) 0:01:13.898 ******** 2026-03-07 00:59:01.773723 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.773729 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.773735 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.773741 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.773748 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.773754 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.773760 | orchestrator | 2026-03-07 
00:59:01.773766 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-07 00:59:01.773772 | orchestrator | Saturday 07 March 2026 00:47:59 +0000 (0:00:01.167) 0:01:15.065 ******** 2026-03-07 00:59:01.773778 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.773785 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.773791 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.773797 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.773803 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.773834 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.773841 | orchestrator | 2026-03-07 00:59:01.773870 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-07 00:59:01.773879 | orchestrator | Saturday 07 March 2026 00:48:02 +0000 (0:00:02.539) 0:01:17.605 ******** 2026-03-07 00:59:01.773886 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.773892 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.773898 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.773904 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.773910 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.773917 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.773923 | orchestrator | 2026-03-07 00:59:01.773929 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-07 00:59:01.773935 | orchestrator | Saturday 07 March 2026 00:48:03 +0000 (0:00:00.923) 0:01:18.529 ******** 2026-03-07 00:59:01.773941 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.773947 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.773954 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.773960 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.773966 | orchestrator | skipping: [testbed-node-1] 
2026-03-07 00:59:01.773972 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.773978 | orchestrator | 2026-03-07 00:59:01.773989 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-07 00:59:01.773995 | orchestrator | Saturday 07 March 2026 00:48:04 +0000 (0:00:01.184) 0:01:19.713 ******** 2026-03-07 00:59:01.774002 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.774008 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.774054 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.774063 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.774069 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.774080 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.774087 | orchestrator | 2026-03-07 00:59:01.774093 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-07 00:59:01.774099 | orchestrator | Saturday 07 March 2026 00:48:06 +0000 (0:00:01.460) 0:01:21.173 ******** 2026-03-07 00:59:01.774105 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.774111 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.774117 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.774123 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.774129 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.774135 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.774141 | orchestrator | 2026-03-07 00:59:01.774148 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-07 00:59:01.774154 | orchestrator | Saturday 07 March 2026 00:48:07 +0000 (0:00:01.714) 0:01:22.888 ******** 2026-03-07 00:59:01.774160 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.774166 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.774172 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.774178 | orchestrator | skipping: [testbed-node-0] 
2026-03-07 00:59:01.774185 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.774191 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.774197 | orchestrator |
2026-03-07 00:59:01.774203 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-07 00:59:01.774209 | orchestrator | Saturday 07 March 2026 00:48:08 +0000 (0:00:00.742) 0:01:23.631 ********
2026-03-07 00:59:01.774215 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.774222 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.774228 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.774234 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.774240 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.774246 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.774252 | orchestrator |
2026-03-07 00:59:01.774259 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-07 00:59:01.774265 | orchestrator | Saturday 07 March 2026 00:48:09 +0000 (0:00:01.415) 0:01:25.046 ********
2026-03-07 00:59:01.774271 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.774277 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.774283 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.774289 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.774295 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.774302 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.774308 | orchestrator |
2026-03-07 00:59:01.774314 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-07 00:59:01.774320 | orchestrator | Saturday 07 March 2026 00:48:11 +0000 (0:00:01.581) 0:01:26.627 ********
2026-03-07 00:59:01.774326 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.774332 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.774338 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.774345 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.774351 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.774357 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.774363 | orchestrator |
2026-03-07 00:59:01.774369 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-07 00:59:01.774376 | orchestrator | Saturday 07 March 2026 00:48:13 +0000 (0:00:01.547) 0:01:28.175 ********
2026-03-07 00:59:01.774382 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.774388 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.774394 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.774400 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.774406 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.774412 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.774419 | orchestrator |
2026-03-07 00:59:01.774425 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-07 00:59:01.774436 | orchestrator | Saturday 07 March 2026 00:48:13 +0000 (0:00:00.824) 0:01:29.000 ********
2026-03-07 00:59:01.774456 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.774466 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.774475 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.774486 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.774496 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.774507 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.774517 | orchestrator |
2026-03-07 00:59:01.774529 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-07 00:59:01.774535 | orchestrator | Saturday 07 March 2026 00:48:14 +0000 (0:00:01.048) 0:01:30.049 ********
2026-03-07 00:59:01.774541 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.774548 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.774554 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.774560 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.774592 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.774599 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.774606 | orchestrator |
2026-03-07 00:59:01.774612 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-07 00:59:01.774618 | orchestrator | Saturday 07 March 2026 00:48:15 +0000 (0:00:00.809) 0:01:30.858 ********
2026-03-07 00:59:01.774625 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.774631 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.774637 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.774643 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.774649 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.774655 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.774662 | orchestrator |
2026-03-07 00:59:01.774668 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-07 00:59:01.774674 | orchestrator | Saturday 07 March 2026 00:48:17 +0000 (0:00:01.626) 0:01:32.485 ********
2026-03-07 00:59:01.774681 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.774687 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.774693 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.774699 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.774705 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.774711 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.774717 | orchestrator |
2026-03-07 00:59:01.774729 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-07 00:59:01.774735 | orchestrator | Saturday 07 March 2026 00:48:18 +0000 (0:00:01.103) 0:01:33.589 ********
2026-03-07 00:59:01.774741 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.774747 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.774753 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.774760 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.774766 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.774772 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.774778 | orchestrator |
2026-03-07 00:59:01.774784 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-07 00:59:01.774790 | orchestrator | Saturday 07 March 2026 00:48:20 +0000 (0:00:01.833) 0:01:35.422 ********
2026-03-07 00:59:01.774796 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:59:01.774803 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:59:01.774809 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:59:01.774815 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:59:01.774821 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:59:01.774828 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:59:01.774834 | orchestrator |
2026-03-07 00:59:01.774840 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-07 00:59:01.774846 | orchestrator | Saturday 07 March 2026 00:48:23 +0000 (0:00:03.052) 0:01:38.474 ********
2026-03-07 00:59:01.774878 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:59:01.774885 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:59:01.774891 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:59:01.774897 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:59:01.774909 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:59:01.774915 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:59:01.774921 | orchestrator |
2026-03-07 00:59:01.774928 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-07 00:59:01.774934 | orchestrator | Saturday 07 March 2026 00:48:26 +0000 (0:00:03.098) 0:01:41.573 ********
2026-03-07 00:59:01.774940 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:59:01.774946 | orchestrator |
2026-03-07 00:59:01.774952 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-07 00:59:01.774959 | orchestrator | Saturday 07 March 2026 00:48:28 +0000 (0:00:01.900) 0:01:43.474 ********
2026-03-07 00:59:01.774965 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.774971 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.774977 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.774983 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.774990 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.774996 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.775002 | orchestrator |
2026-03-07 00:59:01.775008 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-07 00:59:01.775014 | orchestrator | Saturday 07 March 2026 00:48:29 +0000 (0:00:01.265) 0:01:44.739 ********
2026-03-07 00:59:01.775021 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.775027 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.775033 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.775039 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.775045 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.775051 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.775058 | orchestrator |
2026-03-07 00:59:01.775064 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-07 00:59:01.775070 | orchestrator | Saturday 07 March 2026 00:48:30 +0000 (0:00:01.095) 0:01:45.835 ********
2026-03-07 00:59:01.775076 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-07 00:59:01.775082 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-07 00:59:01.775089 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-07 00:59:01.775095 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-07 00:59:01.775101 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-07 00:59:01.775107 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-07 00:59:01.775113 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-07 00:59:01.775119 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-07 00:59:01.775125 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-07 00:59:01.775132 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-07 00:59:01.775158 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-07 00:59:01.775165 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-07 00:59:01.775171 | orchestrator |
2026-03-07 00:59:01.775177 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-07 00:59:01.775183 | orchestrator | Saturday 07 March 2026 00:48:32 +0000 (0:00:01.652) 0:01:47.488 ********
2026-03-07 00:59:01.775189 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:59:01.775196 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:59:01.775202 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:59:01.775208 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:59:01.775214 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:59:01.775224 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:59:01.775231 | orchestrator |
2026-03-07 00:59:01.775237 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-07 00:59:01.775243 | orchestrator | Saturday 07 March 2026 00:48:34 +0000 (0:00:01.694) 0:01:49.183 ********
2026-03-07 00:59:01.775249 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.775255 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.775265 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.775271 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.775277 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.775283 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.775289 | orchestrator |
2026-03-07 00:59:01.775295 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-07 00:59:01.775301 | orchestrator | Saturday 07 March 2026 00:48:34 +0000 (0:00:00.801) 0:01:49.984 ********
2026-03-07 00:59:01.775307 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.775313 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.775319 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.775325 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.775331 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.775338 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.775344 | orchestrator |
2026-03-07 00:59:01.775350 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-07 00:59:01.775356 | orchestrator | Saturday 07 March 2026 00:48:36 +0000 (0:00:01.605) 0:01:51.589 ********
2026-03-07 00:59:01.775362 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.775368 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.775374 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.775380 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.775386 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.775392 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.775398 | orchestrator |
2026-03-07 00:59:01.775404 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-07 00:59:01.775411 | orchestrator | Saturday 07 March 2026 00:48:37 +0000 (0:00:00.663) 0:01:52.253 ********
2026-03-07 00:59:01.775417 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:59:01.775423 | orchestrator |
2026-03-07 00:59:01.775429 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-07 00:59:01.775436 | orchestrator | Saturday 07 March 2026 00:48:38 +0000 (0:00:01.516) 0:01:53.770 ********
2026-03-07 00:59:01.775442 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.775448 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.775458 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.775468 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.775478 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.775488 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.775497 | orchestrator |
2026-03-07 00:59:01.775508 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-07 00:59:01.775519 | orchestrator | Saturday 07 March 2026 00:49:20 +0000 (0:00:42.060) 0:02:35.831 ********
2026-03-07 00:59:01.775530 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-07 00:59:01.775540 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-07 00:59:01.775550 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-07 00:59:01.775559 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.775565 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-07 00:59:01.775571 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-07 00:59:01.775578 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-07 00:59:01.775589 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.775596 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-07 00:59:01.775602 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-07 00:59:01.775608 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-07 00:59:01.775614 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.775621 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-07 00:59:01.775627 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-07 00:59:01.775633 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-07 00:59:01.775639 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.775645 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-07 00:59:01.775652 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-07 00:59:01.775658 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-07 00:59:01.775664 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.775694 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-07 00:59:01.775701 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-07 00:59:01.775708 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-07 00:59:01.775714 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.775720 | orchestrator |
2026-03-07 00:59:01.775726 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-07 00:59:01.775732 | orchestrator | Saturday 07 March 2026 00:49:21 +0000 (0:00:00.793) 0:02:36.624 ********
2026-03-07 00:59:01.775739 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.775745 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.775751 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.775757 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.775763 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.775770 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.775776 | orchestrator |
2026-03-07 00:59:01.775782 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-07 00:59:01.775788 | orchestrator | Saturday 07 March 2026 00:49:22 +0000 (0:00:00.695) 0:02:37.320 ********
2026-03-07 00:59:01.775798 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.775805 | orchestrator |
2026-03-07 00:59:01.775811 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-07 00:59:01.775817 | orchestrator | Saturday 07 March 2026 00:49:22 +0000 (0:00:00.120) 0:02:37.441 ********
2026-03-07 00:59:01.775823 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.775829 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.775836 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.775842 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.775866 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.775873 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.775880 | orchestrator |
2026-03-07 00:59:01.775886 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-07 00:59:01.775892 | orchestrator | Saturday 07 March 2026 00:49:22 +0000 (0:00:00.634) 0:02:38.075 ********
2026-03-07 00:59:01.775898 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.775904 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.775910 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.775916 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.775922 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.775929 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.775935 | orchestrator |
2026-03-07 00:59:01.775941 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-07 00:59:01.775953 | orchestrator | Saturday 07 March 2026 00:49:23 +0000 (0:00:00.803) 0:02:38.878 ********
2026-03-07 00:59:01.775959 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.775965 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.775971 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.775978 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.775984 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.775990 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.775996 | orchestrator |
2026-03-07 00:59:01.776002 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-07 00:59:01.776008 | orchestrator | Saturday 07 March 2026 00:49:24 +0000 (0:00:00.636) 0:02:39.515 ********
2026-03-07 00:59:01.776015 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.776021 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.776027 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.776033 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.776039 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.776045 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.776052 | orchestrator |
2026-03-07 00:59:01.776058 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-07 00:59:01.776064 | orchestrator | Saturday 07 March 2026 00:49:27 +0000 (0:00:03.557) 0:02:43.073 ********
2026-03-07 00:59:01.776070 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.776076 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.776082 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.776088 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.776095 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.776101 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.776107 | orchestrator |
2026-03-07 00:59:01.776113 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-07 00:59:01.776119 | orchestrator | Saturday 07 March 2026 00:49:28 +0000 (0:00:00.651) 0:02:43.725 ********
2026-03-07 00:59:01.776126 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:59:01.776134 | orchestrator |
2026-03-07 00:59:01.776140 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-07 00:59:01.776146 | orchestrator | Saturday 07 March 2026 00:49:29 +0000 (0:00:01.100) 0:02:44.825 ********
2026-03-07 00:59:01.776152 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.776158 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.776165 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.776171 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.776177 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.776183 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.776189 | orchestrator |
2026-03-07 00:59:01.776195 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-07 00:59:01.776202 | orchestrator | Saturday 07 March 2026 00:49:30 +0000 (0:00:01.014) 0:02:45.840 ********
2026-03-07 00:59:01.776208 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.776214 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.776220 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.776226 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.776235 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.776246 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.776256 | orchestrator |
2026-03-07 00:59:01.776266 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-07 00:59:01.776276 | orchestrator | Saturday 07 March 2026 00:49:31 +0000 (0:00:00.862) 0:02:46.702 ********
2026-03-07 00:59:01.776286 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.776295 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.776335 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.776343 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.776350 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.776362 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.776368 | orchestrator |
2026-03-07 00:59:01.776374 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-07 00:59:01.776380 | orchestrator | Saturday 07 March 2026 00:49:32 +0000 (0:00:01.219) 0:02:47.922 ********
2026-03-07 00:59:01.776386 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.776392 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.776399 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.776405 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.776411 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.776417 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.776423 | orchestrator |
2026-03-07 00:59:01.776429 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-07 00:59:01.776435 | orchestrator | Saturday 07 March 2026 00:49:33 +0000 (0:00:00.851) 0:02:48.774 ********
2026-03-07 00:59:01.776442 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.776448 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.776454 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.776468 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.776474 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.776481 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.776487 | orchestrator |
2026-03-07 00:59:01.776493 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-07 00:59:01.776499 | orchestrator | Saturday 07 March 2026 00:49:34 +0000 (0:00:01.133) 0:02:49.908 ********
2026-03-07 00:59:01.776505 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.776511 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.776518 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.776524 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.776530 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.776536 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.776542 | orchestrator |
2026-03-07 00:59:01.776549 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-07 00:59:01.776555 | orchestrator | Saturday 07 March 2026 00:49:35 +0000 (0:00:00.918) 0:02:50.826 ********
2026-03-07 00:59:01.776561 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.776567 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.776573 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.776579 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.776585 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.776591 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.776597 | orchestrator |
2026-03-07 00:59:01.776604 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-07 00:59:01.776610 | orchestrator | Saturday 07 March 2026 00:49:36 +0000 (0:00:00.989) 0:02:51.815 ********
2026-03-07 00:59:01.776616 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.776622 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.776628 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.776634 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.776641 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.776647 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.776653 | orchestrator |
2026-03-07 00:59:01.776659 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-07 00:59:01.776665 | orchestrator | Saturday 07 March 2026 00:49:37 +0000 (0:00:00.736) 0:02:52.552 ********
2026-03-07 00:59:01.776671 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.776678 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.776684 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.776690 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.776696 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.776702 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.776708 | orchestrator |
2026-03-07 00:59:01.776715 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-07 00:59:01.776726 | orchestrator | Saturday 07 March 2026 00:49:38 +0000 (0:00:01.432) 0:02:53.984 ********
2026-03-07 00:59:01.776732 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:59:01.776738 | orchestrator |
2026-03-07 00:59:01.776744 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-07 00:59:01.776751 | orchestrator | Saturday 07 March 2026 00:49:40 +0000 (0:00:01.361) 0:02:55.345 ********
2026-03-07 00:59:01.776757 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-03-07 00:59:01.776763 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-03-07 00:59:01.776770 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-03-07 00:59:01.776776 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-07 00:59:01.776782 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-03-07 00:59:01.776788 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-07 00:59:01.776795 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-07 00:59:01.776801 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-07 00:59:01.776807 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-07 00:59:01.776813 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-03-07 00:59:01.776819 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-03-07 00:59:01.776825 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-07 00:59:01.776832 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-07 00:59:01.776838 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-07 00:59:01.776844 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-07 00:59:01.776888 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-07 00:59:01.776895 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-07 00:59:01.776901 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-07 00:59:01.776929 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-07 00:59:01.776937 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-07 00:59:01.776943 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-07 00:59:01.776950 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-07 00:59:01.776956 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-07 00:59:01.776962 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-07 00:59:01.776968 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-07 00:59:01.776975 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-07 00:59:01.776981 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-07 00:59:01.776987 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-07 00:59:01.776993 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-07 00:59:01.776999 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-07 00:59:01.777006 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-07 00:59:01.777016 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-07 00:59:01.777022 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-07 00:59:01.777028 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-07 00:59:01.777035 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-07 00:59:01.777041 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-07 00:59:01.777047 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-07 00:59:01.777054 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-07 00:59:01.777122 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-07 00:59:01.777128 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-07 00:59:01.777133 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-07 00:59:01.777139 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-07 00:59:01.777144 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-07 00:59:01.777150 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-07 00:59:01.777155 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-07 00:59:01.777161 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-07 00:59:01.777166 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-07 00:59:01.777171 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-07 00:59:01.777177 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-07 00:59:01.777182 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-07 00:59:01.777188 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-07 00:59:01.777193 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-07 00:59:01.777198 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-07 00:59:01.777204 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-07 00:59:01.777209 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-07 00:59:01.777215 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-07 00:59:01.777220 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-07 00:59:01.777226 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-07 00:59:01.777231 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-07 00:59:01.777236 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-07 00:59:01.777242 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-07 00:59:01.777247 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-07 00:59:01.777252 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-07 00:59:01.777258 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-07 00:59:01.777263 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-07 00:59:01.777269 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-07 00:59:01.777274 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-07 00:59:01.777284 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-07 00:59:01.777293 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-07 00:59:01.777302 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-07 00:59:01.777309 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-07 00:59:01.777317 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-07 00:59:01.777325 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-07 00:59:01.777334 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-07 00:59:01.777343 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-07 00:59:01.777352 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-07 00:59:01.777387 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-07 00:59:01.777396 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-07 00:59:01.777409 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-07 00:59:01.777415 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-07 00:59:01.777420 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-07 00:59:01.777426 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-07 00:59:01.777436 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-07 00:59:01.777445 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-07 00:59:01.777454 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-07 00:59:01.777463 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-07 00:59:01.777472 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-07 00:59:01.777480 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-07 00:59:01.777494 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-07 00:59:01.777503 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-07 00:59:01.777511 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-07 00:59:01.777520 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-07 00:59:01.777529 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-07 00:59:01.777538 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-07 00:59:01.777547 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-07 00:59:01.777557 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-07 00:59:01.777566 | orchestrator |
2026-03-07 00:59:01.777574 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-07 00:59:01.777583 | orchestrator | Saturday 07 March 2026 00:49:48 +0000 (0:00:07.971) 0:03:03.317 ********
2026-03-07 00:59:01.777592 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.777601 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.777608 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.777615 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:59:01.777625 | orchestrator |
2026-03-07 00:59:01.777634 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-07 00:59:01.777643 | orchestrator | Saturday 07 March 2026 00:49:49 +0000 (0:00:01.549) 0:03:04.866 ********
2026-03-07 00:59:01.777651 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-07 00:59:01.777661 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-07 00:59:01.777670 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-07 00:59:01.777679 | orchestrator |
2026-03-07 00:59:01.777688 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-07 00:59:01.777697 | orchestrator | Saturday 07 March 2026 00:49:50 +0000 (0:00:01.201) 0:03:06.067 ********
2026-03-07 00:59:01.777705 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-07 00:59:01.777711 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-07 00:59:01.777716 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-07 00:59:01.777722 | orchestrator |
2026-03-07 00:59:01.777727 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-07 00:59:01.777732 | orchestrator | Saturday 07 March 2026 00:49:52 +0000 (0:00:01.676) 0:03:07.744 ********
2026-03-07 00:59:01.777744 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.777750 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.777755 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.777761 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.777766 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.777772 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.777777 | orchestrator |
2026-03-07 00:59:01.777783 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-07 00:59:01.777788 | orchestrator | Saturday 07 March 2026 00:49:53 +0000 (0:00:00.964) 0:03:08.709 ********
2026-03-07 00:59:01.777793 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.777799 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.777804 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.777810 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.777815 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.777820 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.777826 | orchestrator |
2026-03-07 00:59:01.777831 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-07 00:59:01.777837 | orchestrator | Saturday 07 March 2026 00:49:55 +0000 (0:00:01.433) 0:03:10.143 ********
2026-03-07 00:59:01.777842 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.777860 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.777866 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.777871 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.777877 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.777882 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.777888 | orchestrator |
2026-03-07 00:59:01.777921 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-07 00:59:01.777929 | orchestrator | Saturday 07 March 2026 00:49:56 +0000 (0:00:00.984) 0:03:11.128 ********
2026-03-07 00:59:01.777934 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.777939 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.777945 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.777950 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.777955 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.777961 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.777966 | orchestrator |
2026-03-07 00:59:01.777971 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-07 00:59:01.777977 | orchestrator | Saturday 07 March 2026 00:49:57 +0000 (0:00:01.285) 0:03:12.413 ********
2026-03-07 00:59:01.777982 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.777987 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.777993 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.777998 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.778003 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.778009 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.778039 | orchestrator |
2026-03-07 00:59:01.778046 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-07 00:59:01.778056 | orchestrator | Saturday 07 March 2026 00:49:58 +0000 (0:00:00.786) 0:03:13.200 ********
2026-03-07 00:59:01.778061 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.778067 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.778072 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.778077 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.778083 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.778088 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.778094 | orchestrator |
2026-03-07 00:59:01.778099 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-07 00:59:01.778105 | orchestrator | Saturday 07 March 2026 00:49:58 +0000 (0:00:00.873) 0:03:14.073 ********
2026-03-07 00:59:01.778110 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.778115 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.778128 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.778137 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.778146 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.778154 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.778163 | orchestrator |
2026-03-07 00:59:01.778172 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-07 00:59:01.778180 | orchestrator | Saturday 07 March 2026 00:49:59 +0000 (0:00:00.651) 0:03:14.725 ********
2026-03-07 00:59:01.778190 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.778199 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.778207 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.778216 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.778226 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.778234 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.778243 | orchestrator |
2026-03-07 00:59:01.778252 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-07 00:59:01.778260 | orchestrator | Saturday 07 March 2026 00:50:00 +0000 (0:00:00.812) 0:03:15.537 ********
2026-03-07 00:59:01.778268 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.778278 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.778287 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.778296 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.778306 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.778315 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.778323 | orchestrator |
2026-03-07 00:59:01.778332 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-07 00:59:01.778340 | orchestrator | Saturday 07 March 2026 00:50:03 +0000 (0:00:02.933) 0:03:18.470 ********
2026-03-07 00:59:01.778349 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.778358 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.778367 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.778376 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.778385 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.778393 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.778402 | orchestrator |
2026-03-07 00:59:01.778411 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-07 00:59:01.778420 | orchestrator | Saturday 07 March 2026 00:50:04 +0000 (0:00:01.202) 0:03:19.672 ********
2026-03-07 00:59:01.778429 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.778437 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.778446 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.778455 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.778464 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.778473 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.778483 | orchestrator |
2026-03-07 00:59:01.778492 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-07 00:59:01.778501 | orchestrator | Saturday 07 March 2026 00:50:05 +0000 (0:00:01.091) 0:03:20.764 ********
2026-03-07 00:59:01.778510 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.778519 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.778528 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.778538 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.778547 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.778556 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.778565 | orchestrator |
2026-03-07 00:59:01.778575 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-07 00:59:01.778584 | orchestrator | Saturday 07 March 2026 00:50:07 +0000 (0:00:01.896) 0:03:22.661 ********
2026-03-07 00:59:01.778593 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-07 00:59:01.778603 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-07 00:59:01.778619 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-07 00:59:01.778628 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.778670 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.778681 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.778689 | orchestrator |
2026-03-07 00:59:01.778698 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-07 00:59:01.778707 | orchestrator | Saturday 07 March 2026 00:50:08 +0000 (0:00:00.893) 0:03:23.554 ********
2026-03-07 00:59:01.778718 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-07 00:59:01.778735 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-07 00:59:01.778746 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.778756 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-07 00:59:01.778767 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-07 00:59:01.778777 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-07 00:59:01.778787 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-07 00:59:01.778797 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.778807 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.778816 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.778826 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.778836 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.778845 | orchestrator |
2026-03-07 00:59:01.778880 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-07 00:59:01.778889 | orchestrator | Saturday 07 March 2026 00:50:09 +0000 (0:00:01.252) 0:03:24.807 ********
2026-03-07 00:59:01.778897 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.778906 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.778915 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.778924 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.778933 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.778942 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.778951 | orchestrator |
2026-03-07 00:59:01.778960 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-07 00:59:01.778969 | orchestrator | Saturday 07 March 2026 00:50:10 +0000 (0:00:01.128) 0:03:25.935 ********
2026-03-07 00:59:01.778978 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.778988 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.779001 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.779006 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.779012 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.779017 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.779022 | orchestrator |
2026-03-07 00:59:01.779028 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-07 00:59:01.779034 | orchestrator | Saturday 07 March 2026 00:50:12 +0000 (0:00:01.212) 0:03:27.148 ********
2026-03-07 00:59:01.779039 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.779044 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.779050 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.779055 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.779063 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.779071 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.779080 | orchestrator |
2026-03-07 00:59:01.779088 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-07 00:59:01.779098 | orchestrator | Saturday 07 March 2026 00:50:13 +0000 (0:00:00.971) 0:03:28.119 ********
2026-03-07 00:59:01.779107 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.779116 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.779126 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.779135 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.779144 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.779154 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.779163 | orchestrator |
2026-03-07 00:59:01.779172 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-07 00:59:01.779218 | orchestrator | Saturday 07 March 2026 00:50:14 +0000 (0:00:01.381) 0:03:29.500 ********
2026-03-07 00:59:01.779228 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.779237 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.779246 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.779256 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.779265 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.779274 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.779283 | orchestrator |
2026-03-07 00:59:01.779292 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-07 00:59:01.779302 | orchestrator | Saturday 07 March 2026 00:50:15 +0000 (0:00:00.921) 0:03:30.422 ********
2026-03-07 00:59:01.779311 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.779320 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.779329 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.779338 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.779348 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.779357 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.779366 | orchestrator |
2026-03-07 00:59:01.779375 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-07 00:59:01.779390 | orchestrator | Saturday 07 March 2026 00:50:16 +0000 (0:00:01.463) 0:03:31.885 ********
2026-03-07 00:59:01.779399 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-07 00:59:01.779409 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-07 00:59:01.779418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-07 00:59:01.779427 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.779435 | orchestrator |
2026-03-07 00:59:01.779444 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-07 00:59:01.779452 | orchestrator | Saturday 07 March 2026 00:50:17 +0000 (0:00:00.806) 0:03:32.692 ********
2026-03-07 00:59:01.779461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-07 00:59:01.779470 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-07 00:59:01.779479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-07 00:59:01.779489 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.779506 | orchestrator |
2026-03-07 00:59:01.779515 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-07 00:59:01.779524 | orchestrator | Saturday 07 March 2026 00:50:18 +0000 (0:00:00.622) 0:03:33.315 ********
2026-03-07 00:59:01.779534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-07 00:59:01.779543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-07 00:59:01.779552 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-07 00:59:01.779562 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.779572 | orchestrator |
2026-03-07 00:59:01.779581 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-07 00:59:01.779591 | orchestrator | Saturday 07 March 2026 00:50:18 +0000 (0:00:00.603) 0:03:33.918 ********
2026-03-07 00:59:01.779601 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.779610 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.779620 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.779630 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.779638 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.779646 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.779655 | orchestrator |
2026-03-07 00:59:01.779664 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-07 00:59:01.779672 | orchestrator | Saturday 07 March 2026 00:50:20 +0000 (0:00:01.235) 0:03:35.154 ********
2026-03-07 00:59:01.779681 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-07 00:59:01.779691 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-07 00:59:01.779700 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-07 00:59:01.779710 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-07 00:59:01.779718 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-07 00:59:01.779723 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.779728 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.779734 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-07 00:59:01.779739 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.779746 | orchestrator |
2026-03-07 00:59:01.779755 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-07 00:59:01.779763 | orchestrator | Saturday 07 March 2026 00:50:24 +0000 (0:00:04.714) 0:03:39.868 ********
2026-03-07 00:59:01.779772 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:59:01.779781 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:59:01.779789 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:59:01.779799 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:59:01.779807 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:59:01.779817 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:59:01.779825 | orchestrator |
2026-03-07 00:59:01.779834 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-07 00:59:01.779843 | orchestrator | Saturday 07 March 2026 00:50:29 +0000 (0:00:04.951) 0:03:44.820 ********
2026-03-07 00:59:01.779869 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:59:01.779879 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:59:01.779888 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:59:01.779894 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:59:01.779900 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:59:01.779905 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:59:01.779910 | orchestrator |
2026-03-07 00:59:01.779916 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-07 00:59:01.779921 | orchestrator | Saturday 07 March 2026 00:50:31 +0000 (0:00:01.493) 0:03:46.314 ********
2026-03-07 00:59:01.779926 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.779932 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.779937 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.779943 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:59:01.779948 | orchestrator |
2026-03-07 00:59:01.779960 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-07 00:59:01.779992 | orchestrator | Saturday 07 March 2026 00:50:32 +0000 (0:00:01.059) 0:03:47.373 ********
2026-03-07 00:59:01.779998 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.780004 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.780010 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.780015 | orchestrator |
2026-03-07 00:59:01.780020 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-07 00:59:01.780026 | orchestrator | Saturday 07 March 2026 00:50:32 +0000 (0:00:00.343) 0:03:47.716 ********
2026-03-07 00:59:01.780031 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:59:01.780037 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:59:01.780042 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:59:01.780048 | orchestrator |
2026-03-07 00:59:01.780053 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-07 00:59:01.780059 | orchestrator | Saturday 07 March 2026 00:50:34 +0000 (0:00:01.637) 0:03:49.354 ********
2026-03-07 00:59:01.780064 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-07 00:59:01.780070 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-07 00:59:01.780075 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-07 00:59:01.780080 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.780086 | orchestrator |
2026-03-07 00:59:01.780099 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-07 00:59:01.780105 | orchestrator | Saturday 07 March 2026 00:50:35 +0000 (0:00:00.797) 0:03:50.152 ********
2026-03-07 00:59:01.780110 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.780116 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.780121 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.780126 | orchestrator |
2026-03-07 00:59:01.780132 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-07 00:59:01.780137 | orchestrator | Saturday 07 March 2026 00:50:35 +0000 (0:00:00.411) 0:03:50.563 ********
2026-03-07 00:59:01.780143 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.780148 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.780153 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.780159 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:59:01.780164 | orchestrator |
2026-03-07 00:59:01.780170 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-07 00:59:01.780175 | orchestrator | Saturday 07 March 2026 00:50:36 +0000 (0:00:01.243) 0:03:51.807 ********
2026-03-07 00:59:01.780181 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-07 00:59:01.780186 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-07 00:59:01.780191 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-07 00:59:01.780197 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.780202 | orchestrator |
2026-03-07 00:59:01.780208 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-07 00:59:01.780213 | orchestrator | Saturday 07 March 2026 00:50:37 +0000 (0:00:00.485) 0:03:52.292 ********
2026-03-07 00:59:01.780218 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.780224 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.780229 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.780234 | orchestrator |
2026-03-07 00:59:01.780240 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-07 00:59:01.780245 | orchestrator | Saturday 07 March 2026 00:50:37 +0000 (0:00:00.384) 0:03:52.677 ********
2026-03-07 00:59:01.780251 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.780256 | orchestrator |
2026-03-07 00:59:01.780262 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-07 00:59:01.780267 | orchestrator | Saturday 07 March 2026 00:50:37 +0000 (0:00:00.289) 0:03:52.966 ********
2026-03-07 00:59:01.780277 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.780282 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.780288 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.780293 | orchestrator |
2026-03-07 00:59:01.780299 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-07 00:59:01.780304 | orchestrator | Saturday 07 March 2026 00:50:38 +0000 (0:00:00.400) 0:03:53.367 ********
2026-03-07 00:59:01.780310 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.780315 | orchestrator |
2026-03-07 00:59:01.780321 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-07 00:59:01.780326 | orchestrator | Saturday 07 March 2026 00:50:38 +0000 (0:00:00.231) 0:03:53.598 ********
2026-03-07 00:59:01.780331 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.780337 | orchestrator |
2026-03-07 00:59:01.780342 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-07 00:59:01.780348 | orchestrator | Saturday 07 March 2026 00:50:38 +0000 (0:00:00.371) 0:03:53.970 ********
2026-03-07 00:59:01.780353 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.780358 | orchestrator |
2026-03-07 00:59:01.780364 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-07 00:59:01.780369 | orchestrator | Saturday 07 March 2026 00:50:39 +0000 (0:00:00.423) 0:03:54.394 ********
2026-03-07 00:59:01.780375 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.780380 | orchestrator |
2026-03-07 00:59:01.780386 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-07 00:59:01.780393 | orchestrator | Saturday 07 March 2026 00:50:39 +0000 (0:00:00.261) 0:03:54.655 ********
2026-03-07 00:59:01.780402 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.780411 | orchestrator |
2026-03-07 00:59:01.780420 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-07 00:59:01.780429 | orchestrator | Saturday 07 March 2026 00:50:39 +0000 (0:00:00.239) 0:03:54.895 ********
2026-03-07 00:59:01.780438 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-07 00:59:01.780446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:59:01.780455 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:59:01.780465 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.780474 | orchestrator | 2026-03-07 00:59:01.780483 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-07 00:59:01.780518 | orchestrator | Saturday 07 March 2026 00:50:40 +0000 (0:00:00.456) 0:03:55.352 ******** 2026-03-07 00:59:01.780529 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.780538 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.780547 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.780556 | orchestrator | 2026-03-07 00:59:01.780565 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-07 00:59:01.780574 | orchestrator | Saturday 07 March 2026 00:50:40 +0000 (0:00:00.405) 0:03:55.757 ******** 2026-03-07 00:59:01.780584 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.780592 | orchestrator | 2026-03-07 00:59:01.780601 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-07 00:59:01.780609 | orchestrator | Saturday 07 March 2026 00:50:40 +0000 (0:00:00.291) 0:03:56.049 ******** 2026-03-07 00:59:01.780619 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.780627 | orchestrator | 2026-03-07 00:59:01.780636 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-07 00:59:01.780645 | orchestrator | Saturday 07 March 2026 00:50:41 +0000 (0:00:00.307) 0:03:56.356 ******** 2026-03-07 00:59:01.780654 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.780663 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.780677 | orchestrator | skipping: [testbed-node-2] 2026-03-07 
00:59:01.780687 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:59:01.780697 | orchestrator | 2026-03-07 00:59:01.780712 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-07 00:59:01.780722 | orchestrator | Saturday 07 March 2026 00:50:42 +0000 (0:00:01.331) 0:03:57.687 ******** 2026-03-07 00:59:01.780731 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.780740 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.780749 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.780759 | orchestrator | 2026-03-07 00:59:01.780768 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-07 00:59:01.780777 | orchestrator | Saturday 07 March 2026 00:50:42 +0000 (0:00:00.372) 0:03:58.060 ******** 2026-03-07 00:59:01.780786 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:59:01.780795 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:59:01.780804 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:59:01.780814 | orchestrator | 2026-03-07 00:59:01.780823 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-07 00:59:01.780832 | orchestrator | Saturday 07 March 2026 00:50:44 +0000 (0:00:01.626) 0:03:59.686 ******** 2026-03-07 00:59:01.780841 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:59:01.780892 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 00:59:01.780903 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:59:01.780914 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.780923 | orchestrator | 2026-03-07 00:59:01.780933 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-07 00:59:01.780943 | orchestrator | Saturday 07 
March 2026 00:50:45 +0000 (0:00:01.269) 0:04:00.955 ******** 2026-03-07 00:59:01.780953 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.780962 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.780972 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.780982 | orchestrator | 2026-03-07 00:59:01.780991 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-07 00:59:01.781000 | orchestrator | Saturday 07 March 2026 00:50:46 +0000 (0:00:00.703) 0:04:01.659 ******** 2026-03-07 00:59:01.781009 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.781017 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.781026 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.781035 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:59:01.781045 | orchestrator | 2026-03-07 00:59:01.781054 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-07 00:59:01.781064 | orchestrator | Saturday 07 March 2026 00:50:47 +0000 (0:00:01.067) 0:04:02.727 ******** 2026-03-07 00:59:01.781069 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.781075 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.781080 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.781086 | orchestrator | 2026-03-07 00:59:01.781091 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-07 00:59:01.781096 | orchestrator | Saturday 07 March 2026 00:50:48 +0000 (0:00:00.750) 0:04:03.477 ******** 2026-03-07 00:59:01.781102 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:59:01.781107 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:59:01.781113 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:59:01.781118 | orchestrator | 2026-03-07 00:59:01.781124 | orchestrator | RUNNING HANDLER 
[ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-07 00:59:01.781129 | orchestrator | Saturday 07 March 2026 00:50:49 +0000 (0:00:01.287) 0:04:04.765 ******** 2026-03-07 00:59:01.781134 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:59:01.781140 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 00:59:01.781145 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:59:01.781151 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.781156 | orchestrator | 2026-03-07 00:59:01.781161 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-07 00:59:01.781175 | orchestrator | Saturday 07 March 2026 00:50:50 +0000 (0:00:00.742) 0:04:05.507 ******** 2026-03-07 00:59:01.781184 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.781192 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.781201 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.781211 | orchestrator | 2026-03-07 00:59:01.781220 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-07 00:59:01.781229 | orchestrator | Saturday 07 March 2026 00:50:50 +0000 (0:00:00.399) 0:04:05.907 ******** 2026-03-07 00:59:01.781238 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.781247 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.781253 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.781258 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.781263 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.781296 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.781302 | orchestrator | 2026-03-07 00:59:01.781308 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-07 00:59:01.781313 | orchestrator | Saturday 07 March 2026 00:50:51 +0000 
(0:00:01.151) 0:04:07.058 ******** 2026-03-07 00:59:01.781318 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.781324 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.781329 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.781335 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:59:01.781340 | orchestrator | 2026-03-07 00:59:01.781346 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-07 00:59:01.781351 | orchestrator | Saturday 07 March 2026 00:50:52 +0000 (0:00:00.933) 0:04:07.992 ******** 2026-03-07 00:59:01.781357 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.781362 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.781367 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.781373 | orchestrator | 2026-03-07 00:59:01.781378 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-07 00:59:01.781388 | orchestrator | Saturday 07 March 2026 00:50:53 +0000 (0:00:00.767) 0:04:08.759 ******** 2026-03-07 00:59:01.781394 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:01.781399 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:59:01.781405 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:59:01.781410 | orchestrator | 2026-03-07 00:59:01.781416 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-07 00:59:01.781421 | orchestrator | Saturday 07 March 2026 00:50:55 +0000 (0:00:01.348) 0:04:10.108 ******** 2026-03-07 00:59:01.781429 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-07 00:59:01.781436 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-07 00:59:01.781444 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-07 00:59:01.781456 | orchestrator 
| skipping: [testbed-node-0] 2026-03-07 00:59:01.781467 | orchestrator | 2026-03-07 00:59:01.781474 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-07 00:59:01.781481 | orchestrator | Saturday 07 March 2026 00:50:55 +0000 (0:00:00.763) 0:04:10.871 ******** 2026-03-07 00:59:01.781489 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.781496 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.781503 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.781511 | orchestrator | 2026-03-07 00:59:01.781518 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-07 00:59:01.781525 | orchestrator | 2026-03-07 00:59:01.781532 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-07 00:59:01.781540 | orchestrator | Saturday 07 March 2026 00:50:56 +0000 (0:00:01.147) 0:04:12.018 ******** 2026-03-07 00:59:01.781548 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:59:01.781556 | orchestrator | 2026-03-07 00:59:01.781563 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-07 00:59:01.781578 | orchestrator | Saturday 07 March 2026 00:50:57 +0000 (0:00:00.689) 0:04:12.708 ******** 2026-03-07 00:59:01.781586 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:59:01.781594 | orchestrator | 2026-03-07 00:59:01.781602 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-07 00:59:01.781609 | orchestrator | Saturday 07 March 2026 00:50:58 +0000 (0:00:00.681) 0:04:13.389 ******** 2026-03-07 00:59:01.781617 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.781624 | orchestrator | ok: [testbed-node-1] 
2026-03-07 00:59:01.781631 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.781640 | orchestrator | 2026-03-07 00:59:01.781647 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-07 00:59:01.781655 | orchestrator | Saturday 07 March 2026 00:50:59 +0000 (0:00:01.199) 0:04:14.589 ******** 2026-03-07 00:59:01.781662 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.781670 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.781678 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.781686 | orchestrator | 2026-03-07 00:59:01.781693 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-07 00:59:01.781702 | orchestrator | Saturday 07 March 2026 00:50:59 +0000 (0:00:00.348) 0:04:14.937 ******** 2026-03-07 00:59:01.781708 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.781713 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.781718 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.781723 | orchestrator | 2026-03-07 00:59:01.781728 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-07 00:59:01.781733 | orchestrator | Saturday 07 March 2026 00:51:00 +0000 (0:00:00.429) 0:04:15.366 ******** 2026-03-07 00:59:01.781737 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.781742 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.781747 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.781752 | orchestrator | 2026-03-07 00:59:01.781760 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-07 00:59:01.781768 | orchestrator | Saturday 07 March 2026 00:51:00 +0000 (0:00:00.375) 0:04:15.742 ******** 2026-03-07 00:59:01.781775 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.781784 | orchestrator | ok: [testbed-node-1] 2026-03-07 
00:59:01.781795 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.781803 | orchestrator | 2026-03-07 00:59:01.781810 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-07 00:59:01.781818 | orchestrator | Saturday 07 March 2026 00:51:01 +0000 (0:00:01.261) 0:04:17.003 ******** 2026-03-07 00:59:01.781826 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.781832 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.781840 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.781865 | orchestrator | 2026-03-07 00:59:01.781875 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-07 00:59:01.781883 | orchestrator | Saturday 07 March 2026 00:51:02 +0000 (0:00:00.474) 0:04:17.478 ******** 2026-03-07 00:59:01.781929 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.781936 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.781941 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.781945 | orchestrator | 2026-03-07 00:59:01.781950 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-07 00:59:01.781955 | orchestrator | Saturday 07 March 2026 00:51:02 +0000 (0:00:00.592) 0:04:18.070 ******** 2026-03-07 00:59:01.781960 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.781965 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.781970 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.781975 | orchestrator | 2026-03-07 00:59:01.781980 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-07 00:59:01.781985 | orchestrator | Saturday 07 March 2026 00:51:03 +0000 (0:00:00.950) 0:04:19.021 ******** 2026-03-07 00:59:01.781996 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.782001 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.782006 | orchestrator | ok: 
[testbed-node-2] 2026-03-07 00:59:01.782011 | orchestrator | 2026-03-07 00:59:01.782048 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-07 00:59:01.782053 | orchestrator | Saturday 07 March 2026 00:51:05 +0000 (0:00:01.435) 0:04:20.456 ******** 2026-03-07 00:59:01.782058 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.782068 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.782073 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.782078 | orchestrator | 2026-03-07 00:59:01.782083 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-07 00:59:01.782088 | orchestrator | Saturday 07 March 2026 00:51:05 +0000 (0:00:00.415) 0:04:20.872 ******** 2026-03-07 00:59:01.782092 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.782097 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.782102 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.782107 | orchestrator | 2026-03-07 00:59:01.782111 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-07 00:59:01.782116 | orchestrator | Saturday 07 March 2026 00:51:06 +0000 (0:00:00.478) 0:04:21.350 ******** 2026-03-07 00:59:01.782121 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.782126 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.782131 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.782135 | orchestrator | 2026-03-07 00:59:01.782140 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-07 00:59:01.782145 | orchestrator | Saturday 07 March 2026 00:51:06 +0000 (0:00:00.384) 0:04:21.734 ******** 2026-03-07 00:59:01.782150 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.782154 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.782159 | orchestrator | skipping: [testbed-node-2] 
2026-03-07 00:59:01.782164 | orchestrator | 2026-03-07 00:59:01.782169 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-07 00:59:01.782173 | orchestrator | Saturday 07 March 2026 00:51:07 +0000 (0:00:00.795) 0:04:22.530 ******** 2026-03-07 00:59:01.782178 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.782183 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.782188 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.782192 | orchestrator | 2026-03-07 00:59:01.782197 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-07 00:59:01.782202 | orchestrator | Saturday 07 March 2026 00:51:07 +0000 (0:00:00.359) 0:04:22.890 ******** 2026-03-07 00:59:01.782207 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.782212 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.782216 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.782221 | orchestrator | 2026-03-07 00:59:01.782226 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-07 00:59:01.782231 | orchestrator | Saturday 07 March 2026 00:51:08 +0000 (0:00:00.393) 0:04:23.283 ******** 2026-03-07 00:59:01.782236 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.782240 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.782245 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.782250 | orchestrator | 2026-03-07 00:59:01.782255 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-07 00:59:01.782260 | orchestrator | Saturday 07 March 2026 00:51:08 +0000 (0:00:00.469) 0:04:23.753 ******** 2026-03-07 00:59:01.782264 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.782269 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.782274 | orchestrator | ok: [testbed-node-2] 2026-03-07 
00:59:01.782279 | orchestrator | 2026-03-07 00:59:01.782283 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-07 00:59:01.782288 | orchestrator | Saturday 07 March 2026 00:51:09 +0000 (0:00:00.394) 0:04:24.147 ******** 2026-03-07 00:59:01.782293 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.782302 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.782307 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.782312 | orchestrator | 2026-03-07 00:59:01.782316 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-07 00:59:01.782321 | orchestrator | Saturday 07 March 2026 00:51:09 +0000 (0:00:00.821) 0:04:24.969 ******** 2026-03-07 00:59:01.782326 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.782331 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.782335 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.782340 | orchestrator | 2026-03-07 00:59:01.782345 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-07 00:59:01.782349 | orchestrator | Saturday 07 March 2026 00:51:10 +0000 (0:00:00.624) 0:04:25.593 ******** 2026-03-07 00:59:01.782354 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.782359 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.782363 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.782368 | orchestrator | 2026-03-07 00:59:01.782373 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-07 00:59:01.782378 | orchestrator | Saturday 07 March 2026 00:51:10 +0000 (0:00:00.364) 0:04:25.958 ******** 2026-03-07 00:59:01.782383 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:59:01.782387 | orchestrator | 2026-03-07 00:59:01.782392 | orchestrator | TASK [ceph-mon : Check if monitor 
initial keyring already exists] ************** 2026-03-07 00:59:01.782397 | orchestrator | Saturday 07 March 2026 00:51:11 +0000 (0:00:01.075) 0:04:27.033 ******** 2026-03-07 00:59:01.782402 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.782407 | orchestrator | 2026-03-07 00:59:01.782429 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-07 00:59:01.782435 | orchestrator | Saturday 07 March 2026 00:51:12 +0000 (0:00:00.208) 0:04:27.242 ******** 2026-03-07 00:59:01.782440 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-07 00:59:01.782445 | orchestrator | 2026-03-07 00:59:01.782450 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-07 00:59:01.782455 | orchestrator | Saturday 07 March 2026 00:51:13 +0000 (0:00:01.385) 0:04:28.628 ******** 2026-03-07 00:59:01.782460 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.782464 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.782469 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.782474 | orchestrator | 2026-03-07 00:59:01.782478 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-07 00:59:01.782483 | orchestrator | Saturday 07 March 2026 00:51:14 +0000 (0:00:00.458) 0:04:29.086 ******** 2026-03-07 00:59:01.782488 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.782493 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.782497 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.782502 | orchestrator | 2026-03-07 00:59:01.782507 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-07 00:59:01.782515 | orchestrator | Saturday 07 March 2026 00:51:14 +0000 (0:00:00.935) 0:04:30.021 ******** 2026-03-07 00:59:01.782520 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:01.782525 | orchestrator | changed: [testbed-node-1] 
2026-03-07 00:59:01.782530 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:59:01.782535 | orchestrator | 2026-03-07 00:59:01.782539 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-07 00:59:01.782544 | orchestrator | Saturday 07 March 2026 00:51:16 +0000 (0:00:01.749) 0:04:31.771 ******** 2026-03-07 00:59:01.782549 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:01.782554 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:59:01.782558 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:59:01.782563 | orchestrator | 2026-03-07 00:59:01.782568 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-07 00:59:01.782573 | orchestrator | Saturday 07 March 2026 00:51:17 +0000 (0:00:00.952) 0:04:32.724 ******** 2026-03-07 00:59:01.782577 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:01.782586 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:59:01.782591 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:59:01.782596 | orchestrator | 2026-03-07 00:59:01.782601 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-07 00:59:01.782605 | orchestrator | Saturday 07 March 2026 00:51:18 +0000 (0:00:01.041) 0:04:33.765 ******** 2026-03-07 00:59:01.782610 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.782615 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.782620 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.782625 | orchestrator | 2026-03-07 00:59:01.782629 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-07 00:59:01.782634 | orchestrator | Saturday 07 March 2026 00:51:19 +0000 (0:00:00.907) 0:04:34.673 ******** 2026-03-07 00:59:01.782639 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:01.782644 | orchestrator | 2026-03-07 00:59:01.782649 | orchestrator | TASK 
[ceph-mon : Slurp admin keyring] ****************************************** 2026-03-07 00:59:01.782653 | orchestrator | Saturday 07 March 2026 00:51:21 +0000 (0:00:02.203) 0:04:36.877 ******** 2026-03-07 00:59:01.782658 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.782663 | orchestrator | 2026-03-07 00:59:01.782668 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-07 00:59:01.782672 | orchestrator | Saturday 07 March 2026 00:51:22 +0000 (0:00:00.970) 0:04:37.847 ******** 2026-03-07 00:59:01.782677 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-07 00:59:01.782682 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 00:59:01.782687 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 00:59:01.782692 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-07 00:59:01.782696 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-07 00:59:01.782701 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-07 00:59:01.782706 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-07 00:59:01.782711 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-03-07 00:59:01.782716 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-07 00:59:01.782720 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-07 00:59:01.782725 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-07 00:59:01.782730 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-07 00:59:01.782735 | orchestrator | 2026-03-07 00:59:01.782740 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-07 00:59:01.782744 | orchestrator | Saturday 07 March 2026 00:51:26 +0000 (0:00:03.843) 0:04:41.691 
******** 2026-03-07 00:59:01.782749 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:01.782754 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:59:01.782758 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:59:01.782763 | orchestrator | 2026-03-07 00:59:01.782769 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-07 00:59:01.782773 | orchestrator | Saturday 07 March 2026 00:51:28 +0000 (0:00:01.777) 0:04:43.469 ******** 2026-03-07 00:59:01.782778 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.782783 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.782787 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.782792 | orchestrator | 2026-03-07 00:59:01.782797 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-07 00:59:01.782802 | orchestrator | Saturday 07 March 2026 00:51:29 +0000 (0:00:00.708) 0:04:44.177 ******** 2026-03-07 00:59:01.782806 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.782811 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.782816 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.782821 | orchestrator | 2026-03-07 00:59:01.782825 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-07 00:59:01.782830 | orchestrator | Saturday 07 March 2026 00:51:30 +0000 (0:00:01.141) 0:04:45.319 ******** 2026-03-07 00:59:01.782839 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:59:01.783003 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:01.783031 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:59:01.783036 | orchestrator | 2026-03-07 00:59:01.783041 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-07 00:59:01.783046 | orchestrator | Saturday 07 March 2026 00:51:33 +0000 (0:00:02.820) 0:04:48.139 ******** 2026-03-07 00:59:01.783051 | 
orchestrator | changed: [testbed-node-0]
2026-03-07 00:59:01.783056 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:59:01.783061 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:59:01.783066 | orchestrator |
2026-03-07 00:59:01.783070 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-07 00:59:01.783075 | orchestrator | Saturday 07 March 2026 00:51:35 +0000 (0:00:02.555) 0:04:50.694 ********
2026-03-07 00:59:01.783080 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.783085 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.783089 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.783094 | orchestrator |
2026-03-07 00:59:01.783099 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-03-07 00:59:01.783104 | orchestrator | Saturday 07 March 2026 00:51:36 +0000 (0:00:00.400) 0:04:51.095 ********
2026-03-07 00:59:01.783113 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:59:01.783118 | orchestrator |
2026-03-07 00:59:01.783123 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-03-07 00:59:01.783128 | orchestrator | Saturday 07 March 2026 00:51:36 +0000 (0:00:00.855) 0:04:51.950 ********
2026-03-07 00:59:01.783133 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.783138 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.783142 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.783147 | orchestrator |
2026-03-07 00:59:01.783152 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-03-07 00:59:01.783157 | orchestrator | Saturday 07 March 2026 00:51:37 +0000 (0:00:00.356) 0:04:52.306 ********
2026-03-07 00:59:01.783162 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.783166 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.783171 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.783176 | orchestrator |
2026-03-07 00:59:01.783180 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-03-07 00:59:01.783185 | orchestrator | Saturday 07 March 2026 00:51:37 +0000 (0:00:00.434) 0:04:52.740 ********
2026-03-07 00:59:01.783190 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-2, testbed-node-1
2026-03-07 00:59:01.783195 | orchestrator |
2026-03-07 00:59:01.783200 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-03-07 00:59:01.783205 | orchestrator | Saturday 07 March 2026 00:51:39 +0000 (0:00:01.353) 0:04:54.094 ********
2026-03-07 00:59:01.783209 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:59:01.783214 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:59:01.783219 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:59:01.783224 | orchestrator |
2026-03-07 00:59:01.783228 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-03-07 00:59:01.783233 | orchestrator | Saturday 07 March 2026 00:51:43 +0000 (0:00:04.276) 0:04:58.371 ********
2026-03-07 00:59:01.783238 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:59:01.783243 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:59:01.783247 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:59:01.783252 | orchestrator |
2026-03-07 00:59:01.783257 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-03-07 00:59:01.783262 | orchestrator | Saturday 07 March 2026 00:51:45 +0000 (0:00:02.063) 0:05:00.435 ********
2026-03-07 00:59:01.783267 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:59:01.783271 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:59:01.783282 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:59:01.783286 | orchestrator |
2026-03-07 00:59:01.783291 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-07 00:59:01.783296 | orchestrator | Saturday 07 March 2026 00:51:47 +0000 (0:00:02.296) 0:05:02.732 ********
2026-03-07 00:59:01.783301 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:59:01.783306 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:59:01.783310 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:59:01.783315 | orchestrator |
2026-03-07 00:59:01.783320 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-07 00:59:01.783325 | orchestrator | Saturday 07 March 2026 00:51:50 +0000 (0:00:02.982) 0:05:05.715 ********
2026-03-07 00:59:01.783329 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:59:01.783334 | orchestrator |
2026-03-07 00:59:01.783339 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-07 00:59:01.783344 | orchestrator | Saturday 07 March 2026 00:51:51 +0000 (0:00:00.797) 0:05:06.512 ********
2026-03-07 00:59:01.783349 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-03-07 00:59:01.783353 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.783358 | orchestrator |
2026-03-07 00:59:01.783363 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-07 00:59:01.783368 | orchestrator | Saturday 07 March 2026 00:52:13 +0000 (0:00:22.006) 0:05:28.519 ********
2026-03-07 00:59:01.783373 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.783377 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.783382 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.783387 | orchestrator |
2026-03-07 00:59:01.783392 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-07 00:59:01.783397 | orchestrator | Saturday 07 March 2026 00:52:23 +0000 (0:00:09.761) 0:05:38.280 ********
2026-03-07 00:59:01.783401 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.783406 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.783411 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.783416 | orchestrator |
2026-03-07 00:59:01.783420 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-07 00:59:01.783444 | orchestrator | Saturday 07 March 2026 00:52:23 +0000 (0:00:00.665) 0:05:38.946 ********
2026-03-07 00:59:01.783451 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1cb26ff7ef75997410f156630b21c12f2da210c6'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-07 00:59:01.783458 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1cb26ff7ef75997410f156630b21c12f2da210c6'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-07 00:59:01.783465 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1cb26ff7ef75997410f156630b21c12f2da210c6'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-07 00:59:01.783471 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1cb26ff7ef75997410f156630b21c12f2da210c6'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-07 00:59:01.783482 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1cb26ff7ef75997410f156630b21c12f2da210c6'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-07 00:59:01.783488 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1cb26ff7ef75997410f156630b21c12f2da210c6'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__1cb26ff7ef75997410f156630b21c12f2da210c6'}])
2026-03-07 00:59:01.783493 | orchestrator |
2026-03-07 00:59:01.783498 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-07 00:59:01.783502 | orchestrator | Saturday 07 March 2026 00:52:39 +0000 (0:00:15.748) 0:05:54.694 ********
2026-03-07 00:59:01.783507 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.783512 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.783516 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.783521 | orchestrator |
2026-03-07 00:59:01.783525 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-07 00:59:01.783530 | orchestrator | Saturday 07 March 2026 00:52:39 +0000 (0:00:00.351) 0:05:55.046 ********
2026-03-07 00:59:01.783534 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:59:01.783539 | orchestrator |
2026-03-07 00:59:01.783544 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-07 00:59:01.783548 | orchestrator | Saturday 07 March 2026 00:52:40 +0000 (0:00:00.880) 0:05:55.926 ********
2026-03-07 00:59:01.783553 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.783557 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.783562 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.783566 | orchestrator |
2026-03-07 00:59:01.783571 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-07 00:59:01.783576 | orchestrator | Saturday 07 March 2026 00:52:41 +0000 (0:00:00.339) 0:05:56.266 ********
2026-03-07 00:59:01.783580 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.783585 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.783589 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.783594 | orchestrator |
2026-03-07 00:59:01.783598 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-07 00:59:01.783603 | orchestrator | Saturday 07 March 2026 00:52:41 +0000 (0:00:00.378) 0:05:56.645 ********
2026-03-07 00:59:01.783607 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-07 00:59:01.783612 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-07 00:59:01.783617 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-07 00:59:01.783621 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.783626 | orchestrator |
2026-03-07 00:59:01.783631 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-07 00:59:01.783635 | orchestrator | Saturday 07 March 2026 00:52:42 +0000 (0:00:01.366) 0:05:58.012 ********
2026-03-07 00:59:01.783659 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.783664 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.783683 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.783689 | orchestrator |
2026-03-07 00:59:01.783693 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-07 00:59:01.783698 | orchestrator |
2026-03-07 00:59:01.783702 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-07 00:59:01.783707 | orchestrator | Saturday 07 March 2026 00:52:43 +0000 (0:00:00.683) 0:05:58.696 ********
2026-03-07 00:59:01.783715 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:59:01.783720 | orchestrator |
2026-03-07 00:59:01.783725 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-07 00:59:01.783730 | orchestrator | Saturday 07 March 2026 00:52:44 +0000 (0:00:00.576) 0:05:59.272 ********
2026-03-07 00:59:01.783734 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:59:01.783739 | orchestrator |
2026-03-07 00:59:01.783744 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-07 00:59:01.783751 | orchestrator | Saturday 07 March 2026 00:52:45 +0000 (0:00:00.955) 0:06:00.228 ********
2026-03-07 00:59:01.783756 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.783760 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.783765 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.783770 | orchestrator |
2026-03-07 00:59:01.783774 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-07 00:59:01.783779 | orchestrator | Saturday 07 March 2026 00:52:46 +0000 (0:00:00.886) 0:06:01.114 ********
2026-03-07 00:59:01.783784 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.783788 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.783793 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.783797 | orchestrator |
2026-03-07 00:59:01.783802 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-07 00:59:01.783807 | orchestrator | Saturday 07 March 2026 00:52:46 +0000 (0:00:00.376) 0:06:01.491 ********
2026-03-07 00:59:01.783811 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.783816 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.783820 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.783825 | orchestrator |
2026-03-07 00:59:01.783829 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-07 00:59:01.783834 | orchestrator | Saturday 07 March 2026 00:52:47 +0000 (0:00:00.767) 0:06:02.259 ********
2026-03-07 00:59:01.783839 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.783843 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.783866 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.783874 | orchestrator |
2026-03-07 00:59:01.783881 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-07 00:59:01.783887 | orchestrator | Saturday 07 March 2026 00:52:47 +0000 (0:00:00.361) 0:06:02.621 ********
2026-03-07 00:59:01.783894 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.783900 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.783905 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.783909 | orchestrator |
2026-03-07 00:59:01.783914 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-07 00:59:01.783919 | orchestrator | Saturday 07 March 2026 00:52:48 +0000 (0:00:00.972) 0:06:03.593 ********
2026-03-07 00:59:01.783923 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.783928 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.783932 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.783937 | orchestrator |
2026-03-07 00:59:01.783941 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-07 00:59:01.783946 | orchestrator | Saturday 07 March 2026 00:52:48 +0000 (0:00:00.347) 0:06:03.940 ********
2026-03-07 00:59:01.783950 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.783955 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.783959 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.783964 | orchestrator |
2026-03-07 00:59:01.783968 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-07 00:59:01.783973 | orchestrator | Saturday 07 March 2026 00:52:49 +0000 (0:00:00.748) 0:06:04.688 ********
2026-03-07 00:59:01.783978 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.783982 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.783990 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.783995 | orchestrator |
2026-03-07 00:59:01.783999 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-07 00:59:01.784004 | orchestrator | Saturday 07 March 2026 00:52:50 +0000 (0:00:00.857) 0:06:05.546 ********
2026-03-07 00:59:01.784008 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.784013 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.784017 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.784022 | orchestrator |
2026-03-07 00:59:01.784026 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-07 00:59:01.784031 | orchestrator | Saturday 07 March 2026 00:52:51 +0000 (0:00:00.836) 0:06:06.383 ********
2026-03-07 00:59:01.784035 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.784040 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.784044 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.784049 | orchestrator |
2026-03-07 00:59:01.784054 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-07 00:59:01.784058 | orchestrator | Saturday 07 March 2026 00:52:51 +0000 (0:00:00.341) 0:06:06.725 ********
2026-03-07 00:59:01.784063 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.784067 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.784071 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.784076 | orchestrator |
2026-03-07 00:59:01.784080 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-07 00:59:01.784085 | orchestrator | Saturday 07 March 2026 00:52:52 +0000 (0:00:00.753) 0:06:07.478 ********
2026-03-07 00:59:01.784089 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.784094 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.784099 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.784103 | orchestrator |
2026-03-07 00:59:01.784108 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-07 00:59:01.784129 | orchestrator | Saturday 07 March 2026 00:52:52 +0000 (0:00:00.346) 0:06:07.825 ********
2026-03-07 00:59:01.784134 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.784138 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.784143 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.784148 | orchestrator |
2026-03-07 00:59:01.784152 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-07 00:59:01.784157 | orchestrator | Saturday 07 March 2026 00:52:53 +0000 (0:00:00.352) 0:06:08.177 ********
2026-03-07 00:59:01.784161 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.784166 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.784170 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.784175 | orchestrator |
2026-03-07 00:59:01.784179 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-07 00:59:01.784184 | orchestrator | Saturday 07 March 2026 00:52:53 +0000 (0:00:00.416) 0:06:08.594 ********
2026-03-07 00:59:01.784189 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.784193 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.784198 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.784202 | orchestrator |
2026-03-07 00:59:01.784207 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-07 00:59:01.784215 | orchestrator | Saturday 07 March 2026 00:52:53 +0000 (0:00:00.403) 0:06:08.998 ********
2026-03-07 00:59:01.784219 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.784224 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.784228 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.784233 | orchestrator |
2026-03-07 00:59:01.784237 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-07 00:59:01.784242 | orchestrator | Saturday 07 March 2026 00:52:54 +0000 (0:00:00.776) 0:06:09.775 ********
2026-03-07 00:59:01.784246 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.784251 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.784255 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.784260 | orchestrator |
2026-03-07 00:59:01.784268 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-07 00:59:01.784273 | orchestrator | Saturday 07 March 2026 00:52:55 +0000 (0:00:00.448) 0:06:10.223 ********
2026-03-07 00:59:01.784277 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.784282 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.784286 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.784291 | orchestrator |
2026-03-07 00:59:01.784295 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-07 00:59:01.784300 | orchestrator | Saturday 07 March 2026 00:52:55 +0000 (0:00:00.438) 0:06:10.662 ********
2026-03-07 00:59:01.784304 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.784309 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.784313 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.784318 | orchestrator |
2026-03-07 00:59:01.784322 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-07 00:59:01.784327 | orchestrator | Saturday 07 March 2026 00:52:56 +0000 (0:00:01.158) 0:06:11.821 ********
2026-03-07 00:59:01.784331 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-07 00:59:01.784336 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-07 00:59:01.784341 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-07 00:59:01.784345 | orchestrator |
2026-03-07 00:59:01.784350 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-07 00:59:01.784354 | orchestrator | Saturday 07 March 2026 00:52:57 +0000 (0:00:00.770) 0:06:12.592 ********
2026-03-07 00:59:01.784359 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:59:01.784363 | orchestrator |
2026-03-07 00:59:01.784368 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-07 00:59:01.784373 | orchestrator | Saturday 07 March 2026 00:52:58 +0000 (0:00:00.719) 0:06:13.311 ********
2026-03-07 00:59:01.784377 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:59:01.784382 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:59:01.784386 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:59:01.784391 | orchestrator |
2026-03-07 00:59:01.784395 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-07 00:59:01.784400 | orchestrator | Saturday 07 March 2026 00:52:59 +0000 (0:00:00.872) 0:06:14.184 ********
2026-03-07 00:59:01.784404 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.784409 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.784413 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.784418 | orchestrator |
2026-03-07 00:59:01.784422 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-07 00:59:01.784427 | orchestrator | Saturday 07 March 2026 00:52:59 +0000 (0:00:00.886) 0:06:15.071 ********
2026-03-07 00:59:01.784431 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-07 00:59:01.784436 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-07 00:59:01.784440 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-07 00:59:01.784445 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-03-07 00:59:01.784450 | orchestrator |
2026-03-07 00:59:01.784454 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-07 00:59:01.784458 | orchestrator | Saturday 07 March 2026 00:53:11 +0000 (0:00:11.924) 0:06:26.996 ********
2026-03-07 00:59:01.784463 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.784467 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.784472 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.784476 | orchestrator |
2026-03-07 00:59:01.784481 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-07 00:59:01.784486 | orchestrator | Saturday 07 March 2026 00:53:12 +0000 (0:00:00.452) 0:06:27.448 ********
2026-03-07 00:59:01.784490 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-07 00:59:01.784495 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-07 00:59:01.784503 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-07 00:59:01.784507 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-07 00:59:01.784512 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 00:59:01.784531 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 00:59:01.784536 | orchestrator |
2026-03-07 00:59:01.784541 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-07 00:59:01.784545 | orchestrator | Saturday 07 March 2026 00:53:15 +0000 (0:00:02.752) 0:06:30.200 ********
2026-03-07 00:59:01.784550 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-07 00:59:01.784555 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-07 00:59:01.784559 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-07 00:59:01.784564 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-07 00:59:01.784568 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-07 00:59:01.784573 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-07 00:59:01.784577 | orchestrator |
2026-03-07 00:59:01.784582 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-07 00:59:01.784587 | orchestrator | Saturday 07 March 2026 00:53:16 +0000 (0:00:01.272) 0:06:31.473 ********
2026-03-07 00:59:01.784591 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.784596 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.784600 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.784605 | orchestrator |
2026-03-07 00:59:01.784613 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-07 00:59:01.784617 | orchestrator | Saturday 07 March 2026 00:53:17 +0000 (0:00:01.229) 0:06:32.703 ********
2026-03-07 00:59:01.784622 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.784626 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.784631 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.784635 | orchestrator |
2026-03-07 00:59:01.784640 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-07 00:59:01.784645 | orchestrator | Saturday 07 March 2026 00:53:17 +0000 (0:00:00.332) 0:06:33.035 ********
2026-03-07 00:59:01.784649 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.784653 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.784658 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.784663 | orchestrator |
2026-03-07 00:59:01.784667 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-07 00:59:01.784672 | orchestrator | Saturday 07 March 2026 00:53:18 +0000 (0:00:00.372) 0:06:33.408 ********
2026-03-07 00:59:01.784676 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:59:01.784681 | orchestrator |
2026-03-07 00:59:01.784685 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-07 00:59:01.784690 | orchestrator | Saturday 07 March 2026 00:53:19 +0000 (0:00:00.826) 0:06:34.235 ********
2026-03-07 00:59:01.784694 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.784699 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.784703 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.784708 | orchestrator |
2026-03-07 00:59:01.784712 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-07 00:59:01.784717 | orchestrator | Saturday 07 March 2026 00:53:19 +0000 (0:00:00.385) 0:06:34.621 ********
2026-03-07 00:59:01.784721 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.784726 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.784731 | orchestrator | skipping: [testbed-node-2]
2026-03-07 00:59:01.784735 | orchestrator |
2026-03-07 00:59:01.784740 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-07 00:59:01.784744 | orchestrator | Saturday 07 March 2026 00:53:19 +0000 (0:00:00.362) 0:06:34.983 ********
2026-03-07 00:59:01.784749 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:59:01.784757 | orchestrator |
2026-03-07 00:59:01.784762 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-07 00:59:01.784766 | orchestrator | Saturday 07 March 2026 00:53:20 +0000 (0:00:01.072) 0:06:36.056 ********
2026-03-07 00:59:01.784771 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:59:01.784775 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:59:01.784780 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:59:01.784784 | orchestrator |
2026-03-07 00:59:01.784789 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-07 00:59:01.784793 | orchestrator | Saturday 07 March 2026 00:53:22 +0000 (0:00:01.389) 0:06:37.445 ********
2026-03-07 00:59:01.784798 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:59:01.784802 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:59:01.784807 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:59:01.784811 | orchestrator |
2026-03-07 00:59:01.784816 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-07 00:59:01.784820 | orchestrator | Saturday 07 March 2026 00:53:23 +0000 (0:00:01.369) 0:06:38.815 ********
2026-03-07 00:59:01.784825 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:59:01.784829 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:59:01.784834 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:59:01.784838 | orchestrator |
2026-03-07 00:59:01.784843 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-07 00:59:01.784876 | orchestrator | Saturday 07 March 2026 00:53:25 +0000 (0:00:01.989) 0:06:40.804 ********
2026-03-07 00:59:01.784883 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:59:01.784887 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:59:01.784892 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:59:01.784896 | orchestrator |
2026-03-07 00:59:01.784901 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-07 00:59:01.784905 | orchestrator | Saturday 07 March 2026 00:53:28 +0000 (0:00:02.545) 0:06:43.350 ********
2026-03-07 00:59:01.784910 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.784914 | orchestrator | skipping: [testbed-node-1]
2026-03-07 00:59:01.784919 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-03-07 00:59:01.784923 | orchestrator |
2026-03-07 00:59:01.784928 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-03-07 00:59:01.784932 | orchestrator | Saturday 07 March 2026 00:53:28 +0000 (0:00:00.516) 0:06:43.866 ********
2026-03-07 00:59:01.784952 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-03-07 00:59:01.784958 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-03-07 00:59:01.784962 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-03-07 00:59:01.784967 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-03-07 00:59:01.784971 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-03-07 00:59:01.784976 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-07 00:59:01.784980 | orchestrator |
2026-03-07 00:59:01.784985 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-07 00:59:01.784990 | orchestrator | Saturday 07 March 2026 00:53:59 +0000 (0:00:30.755) 0:07:14.622 ********
2026-03-07 00:59:01.784994 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-07 00:59:01.784999 | orchestrator |
2026-03-07 00:59:01.785006 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-07 00:59:01.785011 | orchestrator | Saturday 07 March 2026 00:54:00 +0000 (0:00:01.356) 0:07:15.978 ********
2026-03-07 00:59:01.785016 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.785024 | orchestrator |
2026-03-07 00:59:01.785029 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-07 00:59:01.785033 | orchestrator | Saturday 07 March 2026 00:54:01 +0000 (0:00:00.349) 0:07:16.328 ********
2026-03-07 00:59:01.785038 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.785042 | orchestrator |
2026-03-07 00:59:01.785047 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-07 00:59:01.785052 | orchestrator | Saturday 07 March 2026 00:54:01 +0000 (0:00:00.168) 0:07:16.496 ********
2026-03-07 00:59:01.785056 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-03-07 00:59:01.785061 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-03-07 00:59:01.785065 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-03-07 00:59:01.785070 | orchestrator |
2026-03-07 00:59:01.785074 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-07 00:59:01.785079 | orchestrator | Saturday 07 March 2026 00:54:08 +0000 (0:00:06.708) 0:07:23.205 ********
2026-03-07 00:59:01.785083 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-03-07 00:59:01.785088 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-03-07 00:59:01.785092 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-03-07 00:59:01.785097 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-03-07 00:59:01.785101 | orchestrator |
2026-03-07 00:59:01.785106 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-07 00:59:01.785111 | orchestrator | Saturday 07 March 2026 00:54:13 +0000 (0:00:05.405) 0:07:28.611 ********
2026-03-07 00:59:01.785115 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:59:01.785120 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:59:01.785124 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:59:01.785129 | orchestrator |
2026-03-07 00:59:01.785133 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-07 00:59:01.785138 | orchestrator | Saturday 07 March 2026 00:54:14 +0000 (0:00:00.660) 0:07:29.271 ********
2026-03-07 00:59:01.785142 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 00:59:01.785147 | orchestrator |
2026-03-07 00:59:01.785151 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-07 00:59:01.785156 | orchestrator | Saturday 07 March 2026 00:54:14 +0000 (0:00:00.663) 0:07:29.935 ********
2026-03-07 00:59:01.785161 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.785165 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.785170 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.785174 | orchestrator |
2026-03-07 00:59:01.785179 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-07 00:59:01.785183 | orchestrator | Saturday 07 March 2026 00:54:15 +0000 (0:00:00.321) 0:07:30.257 ********
2026-03-07 00:59:01.785188 | orchestrator | changed: [testbed-node-1]
2026-03-07 00:59:01.785192 | orchestrator | changed: [testbed-node-0]
2026-03-07 00:59:01.785197 | orchestrator | changed: [testbed-node-2]
2026-03-07 00:59:01.785201 | orchestrator |
2026-03-07 00:59:01.785206 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-07 00:59:01.785210 | orchestrator | Saturday 07 March 2026 00:54:16 +0000 (0:00:01.387) 0:07:31.644 ********
2026-03-07 00:59:01.785215 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-07 00:59:01.785219 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-07 00:59:01.785224 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-07 00:59:01.785229 | orchestrator | skipping: [testbed-node-0]
2026-03-07 00:59:01.785233 | orchestrator |
2026-03-07 00:59:01.785238 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-07 00:59:01.785242 | orchestrator | Saturday 07 March 2026 00:54:17 +0000 (0:00:00.769) 0:07:32.414 ********
2026-03-07 00:59:01.785250 | orchestrator | ok: [testbed-node-0]
2026-03-07 00:59:01.785254 | orchestrator | ok: [testbed-node-1]
2026-03-07 00:59:01.785259 | orchestrator | ok: [testbed-node-2]
2026-03-07 00:59:01.785264 | orchestrator |
2026-03-07 00:59:01.785268 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-03-07 00:59:01.785273 | orchestrator |
2026-03-07 00:59:01.785277 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-07
00:59:01.785282 | orchestrator | Saturday 07 March 2026 00:54:18 +0000 (0:00:00.758) 0:07:33.172 ******** 2026-03-07 00:59:01.785300 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:59:01.785305 | orchestrator | 2026-03-07 00:59:01.785310 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-07 00:59:01.785314 | orchestrator | Saturday 07 March 2026 00:54:18 +0000 (0:00:00.513) 0:07:33.685 ******** 2026-03-07 00:59:01.785319 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:59:01.785324 | orchestrator | 2026-03-07 00:59:01.785328 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-07 00:59:01.785333 | orchestrator | Saturday 07 March 2026 00:54:19 +0000 (0:00:00.673) 0:07:34.358 ******** 2026-03-07 00:59:01.785337 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.785342 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.785346 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.785351 | orchestrator | 2026-03-07 00:59:01.785355 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-07 00:59:01.785363 | orchestrator | Saturday 07 March 2026 00:54:19 +0000 (0:00:00.276) 0:07:34.634 ******** 2026-03-07 00:59:01.785368 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.785372 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.785377 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.785381 | orchestrator | 2026-03-07 00:59:01.785386 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-07 00:59:01.785390 | orchestrator | Saturday 07 March 2026 00:54:20 +0000 (0:00:00.692) 0:07:35.327 ******** 
2026-03-07 00:59:01.785395 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.785399 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.785404 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.785408 | orchestrator | 2026-03-07 00:59:01.785413 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-07 00:59:01.785418 | orchestrator | Saturday 07 March 2026 00:54:20 +0000 (0:00:00.742) 0:07:36.069 ******** 2026-03-07 00:59:01.785422 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.785427 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.785431 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.785436 | orchestrator | 2026-03-07 00:59:01.785440 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-07 00:59:01.785445 | orchestrator | Saturday 07 March 2026 00:54:21 +0000 (0:00:00.919) 0:07:36.989 ******** 2026-03-07 00:59:01.785449 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.785454 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.785459 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.785463 | orchestrator | 2026-03-07 00:59:01.785468 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-07 00:59:01.785472 | orchestrator | Saturday 07 March 2026 00:54:22 +0000 (0:00:00.317) 0:07:37.306 ******** 2026-03-07 00:59:01.785477 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.785482 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.785486 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.785491 | orchestrator | 2026-03-07 00:59:01.785495 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-07 00:59:01.785500 | orchestrator | Saturday 07 March 2026 00:54:22 +0000 (0:00:00.294) 0:07:37.601 ******** 2026-03-07 00:59:01.785507 | 
orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.785512 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.785516 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.785521 | orchestrator | 2026-03-07 00:59:01.785525 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-07 00:59:01.785530 | orchestrator | Saturday 07 March 2026 00:54:22 +0000 (0:00:00.288) 0:07:37.890 ******** 2026-03-07 00:59:01.785535 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.785539 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.785544 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.785548 | orchestrator | 2026-03-07 00:59:01.785553 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-07 00:59:01.785557 | orchestrator | Saturday 07 March 2026 00:54:23 +0000 (0:00:01.057) 0:07:38.947 ******** 2026-03-07 00:59:01.785562 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.785566 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.785571 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.785575 | orchestrator | 2026-03-07 00:59:01.785580 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-07 00:59:01.785585 | orchestrator | Saturday 07 March 2026 00:54:24 +0000 (0:00:00.732) 0:07:39.679 ******** 2026-03-07 00:59:01.785589 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.785594 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.785598 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.785603 | orchestrator | 2026-03-07 00:59:01.785607 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-07 00:59:01.785612 | orchestrator | Saturday 07 March 2026 00:54:24 +0000 (0:00:00.358) 0:07:40.037 ******** 2026-03-07 00:59:01.785617 | orchestrator | skipping: 
[testbed-node-3] 2026-03-07 00:59:01.785621 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.785625 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.785630 | orchestrator | 2026-03-07 00:59:01.785635 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-07 00:59:01.785639 | orchestrator | Saturday 07 March 2026 00:54:25 +0000 (0:00:00.275) 0:07:40.313 ******** 2026-03-07 00:59:01.785644 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.785648 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.785653 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.785657 | orchestrator | 2026-03-07 00:59:01.785662 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-07 00:59:01.785667 | orchestrator | Saturday 07 March 2026 00:54:25 +0000 (0:00:00.666) 0:07:40.979 ******** 2026-03-07 00:59:01.785671 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.785676 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.785680 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.785685 | orchestrator | 2026-03-07 00:59:01.785689 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-07 00:59:01.785694 | orchestrator | Saturday 07 March 2026 00:54:26 +0000 (0:00:00.334) 0:07:41.314 ******** 2026-03-07 00:59:01.785699 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.785703 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.785721 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.785726 | orchestrator | 2026-03-07 00:59:01.785731 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-07 00:59:01.785735 | orchestrator | Saturday 07 March 2026 00:54:26 +0000 (0:00:00.322) 0:07:41.636 ******** 2026-03-07 00:59:01.785740 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.785745 | 
orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.785749 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.785754 | orchestrator | 2026-03-07 00:59:01.785758 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-07 00:59:01.785763 | orchestrator | Saturday 07 March 2026 00:54:26 +0000 (0:00:00.281) 0:07:41.918 ******** 2026-03-07 00:59:01.785767 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.785772 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.785784 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.785788 | orchestrator | 2026-03-07 00:59:01.785793 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-07 00:59:01.785798 | orchestrator | Saturday 07 March 2026 00:54:27 +0000 (0:00:00.520) 0:07:42.438 ******** 2026-03-07 00:59:01.785802 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.785810 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.785814 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.785819 | orchestrator | 2026-03-07 00:59:01.785823 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-07 00:59:01.785828 | orchestrator | Saturday 07 March 2026 00:54:27 +0000 (0:00:00.354) 0:07:42.792 ******** 2026-03-07 00:59:01.785833 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.785837 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.785842 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.785846 | orchestrator | 2026-03-07 00:59:01.785865 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-07 00:59:01.785870 | orchestrator | Saturday 07 March 2026 00:54:28 +0000 (0:00:00.373) 0:07:43.166 ******** 2026-03-07 00:59:01.785875 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.785879 | orchestrator | ok: 
[testbed-node-4] 2026-03-07 00:59:01.785884 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.785888 | orchestrator | 2026-03-07 00:59:01.785893 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-07 00:59:01.785897 | orchestrator | Saturday 07 March 2026 00:54:28 +0000 (0:00:00.757) 0:07:43.923 ******** 2026-03-07 00:59:01.785902 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.785907 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.785911 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.785916 | orchestrator | 2026-03-07 00:59:01.785920 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-07 00:59:01.785925 | orchestrator | Saturday 07 March 2026 00:54:29 +0000 (0:00:00.318) 0:07:44.241 ******** 2026-03-07 00:59:01.785929 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-07 00:59:01.785934 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-07 00:59:01.785939 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-07 00:59:01.785943 | orchestrator | 2026-03-07 00:59:01.785948 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-07 00:59:01.785952 | orchestrator | Saturday 07 March 2026 00:54:29 +0000 (0:00:00.632) 0:07:44.874 ******** 2026-03-07 00:59:01.785957 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:59:01.785962 | orchestrator | 2026-03-07 00:59:01.785966 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-07 00:59:01.785971 | orchestrator | Saturday 07 March 2026 00:54:30 +0000 (0:00:00.506) 0:07:45.380 ******** 2026-03-07 00:59:01.785975 | orchestrator | skipping: 
[testbed-node-3] 2026-03-07 00:59:01.785980 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.785984 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.785989 | orchestrator | 2026-03-07 00:59:01.785993 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-07 00:59:01.785998 | orchestrator | Saturday 07 March 2026 00:54:30 +0000 (0:00:00.599) 0:07:45.979 ******** 2026-03-07 00:59:01.786003 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.786007 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.786012 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.786048 | orchestrator | 2026-03-07 00:59:01.786053 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-07 00:59:01.786058 | orchestrator | Saturday 07 March 2026 00:54:31 +0000 (0:00:00.308) 0:07:46.288 ******** 2026-03-07 00:59:01.786062 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.786067 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.786076 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.786080 | orchestrator | 2026-03-07 00:59:01.786085 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-07 00:59:01.786090 | orchestrator | Saturday 07 March 2026 00:54:31 +0000 (0:00:00.646) 0:07:46.935 ******** 2026-03-07 00:59:01.786094 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.786099 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.786103 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.786108 | orchestrator | 2026-03-07 00:59:01.786113 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-07 00:59:01.786117 | orchestrator | Saturday 07 March 2026 00:54:32 +0000 (0:00:00.409) 0:07:47.344 ******** 2026-03-07 00:59:01.786122 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-07 00:59:01.786126 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-07 00:59:01.786131 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-07 00:59:01.786136 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-07 00:59:01.786140 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-07 00:59:01.786149 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-07 00:59:01.786153 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-07 00:59:01.786158 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-07 00:59:01.786162 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-07 00:59:01.786167 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-07 00:59:01.786171 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-07 00:59:01.786176 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-07 00:59:01.786180 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-07 00:59:01.786185 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-07 00:59:01.786192 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-07 00:59:01.786197 | orchestrator | 2026-03-07 00:59:01.786202 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-03-07 00:59:01.786206 | orchestrator | Saturday 07 March 2026 00:54:36 +0000 (0:00:03.768) 0:07:51.113 ******** 2026-03-07 00:59:01.786211 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.786215 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.786220 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.786224 | orchestrator | 2026-03-07 00:59:01.786229 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-07 00:59:01.786233 | orchestrator | Saturday 07 March 2026 00:54:36 +0000 (0:00:00.437) 0:07:51.550 ******** 2026-03-07 00:59:01.786238 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:59:01.786242 | orchestrator | 2026-03-07 00:59:01.786247 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-07 00:59:01.786252 | orchestrator | Saturday 07 March 2026 00:54:37 +0000 (0:00:00.650) 0:07:52.201 ******** 2026-03-07 00:59:01.786256 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-07 00:59:01.786260 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-07 00:59:01.786265 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-07 00:59:01.786270 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-07 00:59:01.786274 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-07 00:59:01.786282 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-07 00:59:01.786287 | orchestrator | 2026-03-07 00:59:01.786292 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-07 00:59:01.786296 | orchestrator | Saturday 07 March 2026 00:54:38 +0000 (0:00:01.613) 0:07:53.814 ******** 2026-03-07 00:59:01.786301 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-07 00:59:01.786305 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-07 00:59:01.786310 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-07 00:59:01.786314 | orchestrator | 2026-03-07 00:59:01.786319 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-07 00:59:01.786323 | orchestrator | Saturday 07 March 2026 00:54:41 +0000 (0:00:02.380) 0:07:56.195 ******** 2026-03-07 00:59:01.786328 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-07 00:59:01.786332 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-07 00:59:01.786337 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:59:01.786341 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-07 00:59:01.786346 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-07 00:59:01.786350 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:59:01.786355 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-07 00:59:01.786359 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-07 00:59:01.786364 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:59:01.786368 | orchestrator | 2026-03-07 00:59:01.786373 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-07 00:59:01.786377 | orchestrator | Saturday 07 March 2026 00:54:42 +0000 (0:00:01.140) 0:07:57.336 ******** 2026-03-07 00:59:01.786382 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-07 00:59:01.786386 | orchestrator | 2026-03-07 00:59:01.786391 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-07 00:59:01.786395 | orchestrator | Saturday 07 March 2026 00:54:44 +0000 (0:00:02.278) 0:07:59.614 ******** 2026-03-07 00:59:01.786400 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-4, testbed-node-3, testbed-node-5 2026-03-07 00:59:01.786405 | orchestrator | 2026-03-07 00:59:01.786409 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-07 00:59:01.786413 | orchestrator | Saturday 07 March 2026 00:54:45 +0000 (0:00:00.719) 0:08:00.333 ******** 2026-03-07 00:59:01.786418 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6dc70d00-a24c-54e3-88f7-ca23e2f9592d', 'data_vg': 'ceph-6dc70d00-a24c-54e3-88f7-ca23e2f9592d'}) 2026-03-07 00:59:01.786424 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-030f8481-3d62-5800-8c17-c22bf68268ab', 'data_vg': 'ceph-030f8481-3d62-5800-8c17-c22bf68268ab'}) 2026-03-07 00:59:01.786435 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3529c73b-8337-5a09-bb85-f9958b3a6115', 'data_vg': 'ceph-3529c73b-8337-5a09-bb85-f9958b3a6115'}) 2026-03-07 00:59:01.786440 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3960461f-aa79-5447-98f8-9395cd95d2e3', 'data_vg': 'ceph-3960461f-aa79-5447-98f8-9395cd95d2e3'}) 2026-03-07 00:59:01.786444 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8595c920-fb8d-5336-8a83-206e7467f719', 'data_vg': 'ceph-8595c920-fb8d-5336-8a83-206e7467f719'}) 2026-03-07 00:59:01.786449 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5644fa9a-696a-5a4b-ae2f-cbc58e712aba', 'data_vg': 'ceph-5644fa9a-696a-5a4b-ae2f-cbc58e712aba'}) 2026-03-07 00:59:01.786453 | orchestrator | 2026-03-07 00:59:01.786458 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-07 00:59:01.786462 | orchestrator | Saturday 07 March 2026 00:55:27 +0000 (0:00:42.038) 0:08:42.371 ******** 2026-03-07 00:59:01.786467 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.786477 | orchestrator | skipping: [testbed-node-4] 2026-03-07 
00:59:01.786481 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.786486 | orchestrator | 2026-03-07 00:59:01.786493 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-07 00:59:01.786498 | orchestrator | Saturday 07 March 2026 00:55:27 +0000 (0:00:00.347) 0:08:42.718 ******** 2026-03-07 00:59:01.786503 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:59:01.786507 | orchestrator | 2026-03-07 00:59:01.786512 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-07 00:59:01.786516 | orchestrator | Saturday 07 March 2026 00:55:28 +0000 (0:00:00.817) 0:08:43.536 ******** 2026-03-07 00:59:01.786521 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.786525 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.786530 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.786535 | orchestrator | 2026-03-07 00:59:01.786539 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-07 00:59:01.786544 | orchestrator | Saturday 07 March 2026 00:55:29 +0000 (0:00:00.699) 0:08:44.236 ******** 2026-03-07 00:59:01.786548 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.786553 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.786557 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.786562 | orchestrator | 2026-03-07 00:59:01.786567 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-07 00:59:01.786571 | orchestrator | Saturday 07 March 2026 00:55:31 +0000 (0:00:02.684) 0:08:46.920 ******** 2026-03-07 00:59:01.786576 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:59:01.786580 | orchestrator | 2026-03-07 00:59:01.786585 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-03-07 00:59:01.786589 | orchestrator | Saturday 07 March 2026 00:55:32 +0000 (0:00:00.845) 0:08:47.765 ******** 2026-03-07 00:59:01.786594 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:59:01.786598 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:59:01.786603 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:59:01.786608 | orchestrator | 2026-03-07 00:59:01.786612 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-07 00:59:01.786617 | orchestrator | Saturday 07 March 2026 00:55:33 +0000 (0:00:01.236) 0:08:49.002 ******** 2026-03-07 00:59:01.786621 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:59:01.786626 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:59:01.786630 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:59:01.786635 | orchestrator | 2026-03-07 00:59:01.786639 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-07 00:59:01.786644 | orchestrator | Saturday 07 March 2026 00:55:35 +0000 (0:00:01.233) 0:08:50.236 ******** 2026-03-07 00:59:01.786648 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:59:01.786653 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:59:01.786658 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:59:01.786662 | orchestrator | 2026-03-07 00:59:01.786667 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-07 00:59:01.786671 | orchestrator | Saturday 07 March 2026 00:55:37 +0000 (0:00:02.838) 0:08:53.074 ******** 2026-03-07 00:59:01.786676 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.786680 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.786685 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.786690 | orchestrator | 2026-03-07 00:59:01.786694 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-03-07 00:59:01.786699 | orchestrator | Saturday 07 March 2026 00:55:38 +0000 (0:00:00.704) 0:08:53.779 ******** 2026-03-07 00:59:01.786703 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.786708 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.786712 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.786717 | orchestrator | 2026-03-07 00:59:01.786725 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-07 00:59:01.786729 | orchestrator | Saturday 07 March 2026 00:55:39 +0000 (0:00:00.373) 0:08:54.153 ******** 2026-03-07 00:59:01.786734 | orchestrator | ok: [testbed-node-3] => (item=1) 2026-03-07 00:59:01.786739 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-07 00:59:01.786743 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-03-07 00:59:01.786748 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-03-07 00:59:01.786752 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-03-07 00:59:01.786757 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-03-07 00:59:01.786761 | orchestrator | 2026-03-07 00:59:01.786766 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-07 00:59:01.786770 | orchestrator | Saturday 07 March 2026 00:55:40 +0000 (0:00:01.088) 0:08:55.241 ******** 2026-03-07 00:59:01.786775 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-03-07 00:59:01.786779 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-03-07 00:59:01.786784 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-03-07 00:59:01.786789 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-03-07 00:59:01.786793 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-03-07 00:59:01.786800 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-07 00:59:01.786805 | orchestrator | 2026-03-07 00:59:01.786810 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-03-07 00:59:01.786814 | orchestrator | Saturday 07 March 2026 00:55:42 +0000 (0:00:02.296) 0:08:57.538 ******** 2026-03-07 00:59:01.786819 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-03-07 00:59:01.786823 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-03-07 00:59:01.786828 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-03-07 00:59:01.786832 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-03-07 00:59:01.786837 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-07 00:59:01.786841 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-03-07 00:59:01.786846 | orchestrator | 2026-03-07 00:59:01.786865 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-07 00:59:01.786870 | orchestrator | Saturday 07 March 2026 00:55:46 +0000 (0:00:04.073) 0:09:01.611 ******** 2026-03-07 00:59:01.786875 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.786879 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.786887 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-07 00:59:01.786891 | orchestrator | 2026-03-07 00:59:01.786896 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-07 00:59:01.786900 | orchestrator | Saturday 07 March 2026 00:55:49 +0000 (0:00:03.391) 0:09:05.003 ******** 2026-03-07 00:59:01.786905 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.786909 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.786914 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-03-07 00:59:01.786919 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-07 00:59:01.786923 | orchestrator | 2026-03-07 00:59:01.786928 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-07 00:59:01.786932 | orchestrator | Saturday 07 March 2026 00:56:02 +0000 (0:00:12.763) 0:09:17.766 ******** 2026-03-07 00:59:01.786937 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.786942 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.786946 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.786951 | orchestrator | 2026-03-07 00:59:01.786955 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-07 00:59:01.786960 | orchestrator | Saturday 07 March 2026 00:56:04 +0000 (0:00:01.313) 0:09:19.079 ******** 2026-03-07 00:59:01.786964 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.786969 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.786973 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.786981 | orchestrator | 2026-03-07 00:59:01.786986 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-07 00:59:01.786990 | orchestrator | Saturday 07 March 2026 00:56:04 +0000 (0:00:00.442) 0:09:19.522 ******** 2026-03-07 00:59:01.786995 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:59:01.786999 | orchestrator | 2026-03-07 00:59:01.787004 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-07 00:59:01.787009 | orchestrator | Saturday 07 March 2026 00:56:05 +0000 (0:00:00.901) 0:09:20.423 ******** 2026-03-07 00:59:01.787013 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:59:01.787018 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-03-07 00:59:01.787022 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:59:01.787027 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.787031 | orchestrator | 2026-03-07 00:59:01.787036 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-07 00:59:01.787040 | orchestrator | Saturday 07 March 2026 00:56:05 +0000 (0:00:00.432) 0:09:20.856 ******** 2026-03-07 00:59:01.787045 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.787049 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.787054 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.787058 | orchestrator | 2026-03-07 00:59:01.787063 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-07 00:59:01.787068 | orchestrator | Saturday 07 March 2026 00:56:06 +0000 (0:00:00.403) 0:09:21.260 ******** 2026-03-07 00:59:01.787072 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.787077 | orchestrator | 2026-03-07 00:59:01.787081 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-07 00:59:01.787086 | orchestrator | Saturday 07 March 2026 00:56:06 +0000 (0:00:00.258) 0:09:21.519 ******** 2026-03-07 00:59:01.787090 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.787095 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.787099 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.787104 | orchestrator | 2026-03-07 00:59:01.787108 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-07 00:59:01.787113 | orchestrator | Saturday 07 March 2026 00:56:06 +0000 (0:00:00.368) 0:09:21.887 ******** 2026-03-07 00:59:01.787118 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.787122 | orchestrator | 2026-03-07 00:59:01.787127 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-07 00:59:01.787131 | orchestrator | Saturday 07 March 2026 00:56:07 +0000 (0:00:00.283) 0:09:22.171 ******** 2026-03-07 00:59:01.787136 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.787140 | orchestrator | 2026-03-07 00:59:01.787145 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-07 00:59:01.787149 | orchestrator | Saturday 07 March 2026 00:56:07 +0000 (0:00:00.260) 0:09:22.432 ******** 2026-03-07 00:59:01.787154 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.787158 | orchestrator | 2026-03-07 00:59:01.787163 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-07 00:59:01.787167 | orchestrator | Saturday 07 March 2026 00:56:07 +0000 (0:00:00.138) 0:09:22.571 ******** 2026-03-07 00:59:01.787172 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.787176 | orchestrator | 2026-03-07 00:59:01.787184 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-07 00:59:01.787188 | orchestrator | Saturday 07 March 2026 00:56:08 +0000 (0:00:00.979) 0:09:23.550 ******** 2026-03-07 00:59:01.787193 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.787198 | orchestrator | 2026-03-07 00:59:01.787202 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-07 00:59:01.787207 | orchestrator | Saturday 07 March 2026 00:56:08 +0000 (0:00:00.288) 0:09:23.838 ******** 2026-03-07 00:59:01.787211 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 00:59:01.787219 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:59:01.787224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:59:01.787229 | orchestrator | skipping: [testbed-node-3] 2026-03-07 
00:59:01.787233 | orchestrator | 2026-03-07 00:59:01.787237 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-07 00:59:01.787242 | orchestrator | Saturday 07 March 2026 00:56:09 +0000 (0:00:00.519) 0:09:24.357 ******** 2026-03-07 00:59:01.787247 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.787251 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.787256 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.787260 | orchestrator | 2026-03-07 00:59:01.787267 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-07 00:59:01.787272 | orchestrator | Saturday 07 March 2026 00:56:09 +0000 (0:00:00.354) 0:09:24.712 ******** 2026-03-07 00:59:01.787276 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.787281 | orchestrator | 2026-03-07 00:59:01.787286 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-07 00:59:01.787290 | orchestrator | Saturday 07 March 2026 00:56:09 +0000 (0:00:00.241) 0:09:24.953 ******** 2026-03-07 00:59:01.787295 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.787299 | orchestrator | 2026-03-07 00:59:01.787304 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-03-07 00:59:01.787308 | orchestrator | 2026-03-07 00:59:01.787313 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-07 00:59:01.787317 | orchestrator | Saturday 07 March 2026 00:56:10 +0000 (0:00:01.111) 0:09:26.065 ******** 2026-03-07 00:59:01.787322 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:59:01.787327 | orchestrator | 2026-03-07 00:59:01.787332 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-03-07 00:59:01.787336 | orchestrator | Saturday 07 March 2026 00:56:12 +0000 (0:00:01.063) 0:09:27.128 ******** 2026-03-07 00:59:01.787341 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:59:01.787346 | orchestrator | 2026-03-07 00:59:01.787350 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-07 00:59:01.787355 | orchestrator | Saturday 07 March 2026 00:56:13 +0000 (0:00:01.319) 0:09:28.448 ******** 2026-03-07 00:59:01.787359 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.787364 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.787368 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.787373 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.787378 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.787382 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.787387 | orchestrator | 2026-03-07 00:59:01.787391 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-07 00:59:01.787396 | orchestrator | Saturday 07 March 2026 00:56:14 +0000 (0:00:01.020) 0:09:29.468 ******** 2026-03-07 00:59:01.787401 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.787405 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.787410 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.787414 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.787419 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.787423 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.787428 | orchestrator | 2026-03-07 00:59:01.787432 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-07 00:59:01.787437 | orchestrator | Saturday 07 
March 2026 00:56:15 +0000 (0:00:00.684) 0:09:30.153 ******** 2026-03-07 00:59:01.787442 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.787446 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.787454 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.787458 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.787463 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.787467 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.787472 | orchestrator | 2026-03-07 00:59:01.787477 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-07 00:59:01.787481 | orchestrator | Saturday 07 March 2026 00:56:16 +0000 (0:00:00.963) 0:09:31.116 ******** 2026-03-07 00:59:01.787486 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.787490 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.787495 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.787499 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.787504 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.787508 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.787513 | orchestrator | 2026-03-07 00:59:01.787517 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-07 00:59:01.787522 | orchestrator | Saturday 07 March 2026 00:56:16 +0000 (0:00:00.699) 0:09:31.816 ******** 2026-03-07 00:59:01.787526 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.787531 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.787536 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.787540 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.787545 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.787549 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.787554 | orchestrator | 2026-03-07 00:59:01.787558 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-03-07 00:59:01.787563 | orchestrator | Saturday 07 March 2026 00:56:17 +0000 (0:00:01.188) 0:09:33.005 ******** 2026-03-07 00:59:01.787567 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.787572 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.787579 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.787584 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.787589 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.787593 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.787598 | orchestrator | 2026-03-07 00:59:01.787602 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-07 00:59:01.787607 | orchestrator | Saturday 07 March 2026 00:56:18 +0000 (0:00:00.665) 0:09:33.671 ******** 2026-03-07 00:59:01.787611 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.787616 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.787620 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.787625 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.787629 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.787634 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.787638 | orchestrator | 2026-03-07 00:59:01.787643 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-07 00:59:01.787647 | orchestrator | Saturday 07 March 2026 00:56:19 +0000 (0:00:00.801) 0:09:34.473 ******** 2026-03-07 00:59:01.787652 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.787656 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.787661 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.787665 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.787672 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.787677 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.787681 | orchestrator 
| 2026-03-07 00:59:01.787686 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-07 00:59:01.787691 | orchestrator | Saturday 07 March 2026 00:56:20 +0000 (0:00:01.187) 0:09:35.660 ******** 2026-03-07 00:59:01.787695 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.787700 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.787704 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.787708 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.787713 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.787717 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.787722 | orchestrator | 2026-03-07 00:59:01.787726 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-07 00:59:01.787735 | orchestrator | Saturday 07 March 2026 00:56:21 +0000 (0:00:01.364) 0:09:37.025 ******** 2026-03-07 00:59:01.787739 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.787744 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.787748 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.787753 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.787757 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.787762 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.787766 | orchestrator | 2026-03-07 00:59:01.787771 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-07 00:59:01.787775 | orchestrator | Saturday 07 March 2026 00:56:22 +0000 (0:00:00.651) 0:09:37.676 ******** 2026-03-07 00:59:01.787780 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.787784 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.787789 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.787793 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.787798 | orchestrator | ok: [testbed-node-1] 2026-03-07 
00:59:01.787802 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.787807 | orchestrator | 2026-03-07 00:59:01.787812 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-07 00:59:01.787816 | orchestrator | Saturday 07 March 2026 00:56:23 +0000 (0:00:00.790) 0:09:38.467 ******** 2026-03-07 00:59:01.787821 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.787825 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.787830 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.787834 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.787839 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.787843 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.787868 | orchestrator | 2026-03-07 00:59:01.787877 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-07 00:59:01.787884 | orchestrator | Saturday 07 March 2026 00:56:23 +0000 (0:00:00.541) 0:09:39.009 ******** 2026-03-07 00:59:01.787893 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.787901 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.787908 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.787916 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.787922 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.787927 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.787932 | orchestrator | 2026-03-07 00:59:01.787936 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-07 00:59:01.787941 | orchestrator | Saturday 07 March 2026 00:56:24 +0000 (0:00:00.765) 0:09:39.774 ******** 2026-03-07 00:59:01.787945 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.787950 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.787954 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.787959 | orchestrator | skipping: [testbed-node-0] 
2026-03-07 00:59:01.787964 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.787968 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.787973 | orchestrator | 2026-03-07 00:59:01.787978 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-07 00:59:01.787982 | orchestrator | Saturday 07 March 2026 00:56:25 +0000 (0:00:00.625) 0:09:40.400 ******** 2026-03-07 00:59:01.787987 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.787991 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.787996 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.788000 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.788005 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.788009 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.788014 | orchestrator | 2026-03-07 00:59:01.788019 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-07 00:59:01.788023 | orchestrator | Saturday 07 March 2026 00:56:26 +0000 (0:00:00.709) 0:09:41.109 ******** 2026-03-07 00:59:01.788028 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.788039 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.788044 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.788048 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:01.788053 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:01.788057 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:01.788062 | orchestrator | 2026-03-07 00:59:01.788066 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-07 00:59:01.788071 | orchestrator | Saturday 07 March 2026 00:56:26 +0000 (0:00:00.574) 0:09:41.684 ******** 2026-03-07 00:59:01.788079 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.788084 | orchestrator | skipping: [testbed-node-4] 
2026-03-07 00:59:01.788088 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.788093 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.788097 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.788102 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.788107 | orchestrator | 2026-03-07 00:59:01.788111 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-07 00:59:01.788116 | orchestrator | Saturday 07 March 2026 00:56:27 +0000 (0:00:00.756) 0:09:42.440 ******** 2026-03-07 00:59:01.788120 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.788125 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.788129 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.788134 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.788138 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.788143 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.788148 | orchestrator | 2026-03-07 00:59:01.788152 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-07 00:59:01.788157 | orchestrator | Saturday 07 March 2026 00:56:27 +0000 (0:00:00.588) 0:09:43.028 ******** 2026-03-07 00:59:01.788161 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.788166 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.788171 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.788178 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.788183 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.788187 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.788192 | orchestrator | 2026-03-07 00:59:01.788196 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-07 00:59:01.788201 | orchestrator | Saturday 07 March 2026 00:56:29 +0000 (0:00:01.240) 0:09:44.269 ******** 2026-03-07 00:59:01.788206 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-03-07 00:59:01.788210 | orchestrator | 2026-03-07 00:59:01.788215 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-07 00:59:01.788219 | orchestrator | Saturday 07 March 2026 00:56:33 +0000 (0:00:04.628) 0:09:48.898 ******** 2026-03-07 00:59:01.788224 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-07 00:59:01.788229 | orchestrator | 2026-03-07 00:59:01.788233 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-07 00:59:01.788238 | orchestrator | Saturday 07 March 2026 00:56:36 +0000 (0:00:02.210) 0:09:51.108 ******** 2026-03-07 00:59:01.788242 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:59:01.788247 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:59:01.788251 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:59:01.788256 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.788261 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:59:01.788266 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:59:01.788271 | orchestrator | 2026-03-07 00:59:01.788275 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-07 00:59:01.788280 | orchestrator | Saturday 07 March 2026 00:56:38 +0000 (0:00:01.986) 0:09:53.094 ******** 2026-03-07 00:59:01.788285 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:59:01.788290 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:59:01.788295 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:59:01.788300 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:01.788304 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:59:01.788314 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:59:01.788318 | orchestrator | 2026-03-07 00:59:01.788323 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-03-07 00:59:01.788328 | orchestrator | Saturday 07 March 2026 00:56:38 +0000 (0:00:00.930) 0:09:54.024 ******** 2026-03-07 00:59:01.788333 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:59:01.788339 | orchestrator | 2026-03-07 00:59:01.788344 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-07 00:59:01.788349 | orchestrator | Saturday 07 March 2026 00:56:40 +0000 (0:00:01.157) 0:09:55.182 ******** 2026-03-07 00:59:01.788354 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:59:01.788359 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:59:01.788363 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:59:01.788368 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:01.788373 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:59:01.788378 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:59:01.788383 | orchestrator | 2026-03-07 00:59:01.788387 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-07 00:59:01.788392 | orchestrator | Saturday 07 March 2026 00:56:41 +0000 (0:00:01.672) 0:09:56.855 ******** 2026-03-07 00:59:01.788397 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:59:01.788402 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:59:01.788407 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:59:01.788411 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:01.788416 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:59:01.788421 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:59:01.788426 | orchestrator | 2026-03-07 00:59:01.788431 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-07 00:59:01.788436 | orchestrator | Saturday 07 March 2026 00:56:45 +0000 (0:00:03.509) 
0:10:00.365 ******** 2026-03-07 00:59:01.788441 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:59:01.788445 | orchestrator | 2026-03-07 00:59:01.788450 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-07 00:59:01.788455 | orchestrator | Saturday 07 March 2026 00:56:46 +0000 (0:00:01.465) 0:10:01.831 ******** 2026-03-07 00:59:01.788460 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.788465 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.788470 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.788474 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.788479 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.788484 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.788489 | orchestrator | 2026-03-07 00:59:01.788494 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-07 00:59:01.788498 | orchestrator | Saturday 07 March 2026 00:56:47 +0000 (0:00:00.790) 0:10:02.621 ******** 2026-03-07 00:59:01.788503 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:59:01.788511 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:59:01.788516 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:59:01.788521 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:59:01.788526 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:01.788530 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:59:01.788535 | orchestrator | 2026-03-07 00:59:01.788540 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-07 00:59:01.788545 | orchestrator | Saturday 07 March 2026 00:56:49 +0000 (0:00:02.287) 0:10:04.908 ******** 2026-03-07 00:59:01.788550 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.788554 | 
orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.788559 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.788564 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:01.788569 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:01.788578 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:01.788582 | orchestrator | 2026-03-07 00:59:01.788587 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-07 00:59:01.788592 | orchestrator | 2026-03-07 00:59:01.788597 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-07 00:59:01.788602 | orchestrator | Saturday 07 March 2026 00:56:50 +0000 (0:00:00.965) 0:10:05.874 ******** 2026-03-07 00:59:01.788610 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:59:01.788615 | orchestrator | 2026-03-07 00:59:01.788620 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-07 00:59:01.788624 | orchestrator | Saturday 07 March 2026 00:56:51 +0000 (0:00:00.455) 0:10:06.329 ******** 2026-03-07 00:59:01.788629 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:59:01.788634 | orchestrator | 2026-03-07 00:59:01.788639 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-07 00:59:01.788644 | orchestrator | Saturday 07 March 2026 00:56:51 +0000 (0:00:00.688) 0:10:07.017 ******** 2026-03-07 00:59:01.788649 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.788653 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.788658 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.788663 | orchestrator | 2026-03-07 00:59:01.788668 | orchestrator | TASK [ceph-handler : Check for an osd 
container] *******************************
2026-03-07 00:59:01.788673 | orchestrator | Saturday 07 March 2026 00:56:52 +0000 (0:00:00.305) 0:10:07.323 ********
2026-03-07 00:59:01.788678 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.788682 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.788687 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.788692 | orchestrator |
2026-03-07 00:59:01.788697 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-07 00:59:01.788702 | orchestrator | Saturday 07 March 2026 00:56:52 +0000 (0:00:00.674) 0:10:07.997 ********
2026-03-07 00:59:01.788706 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.788711 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.788716 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.788721 | orchestrator |
2026-03-07 00:59:01.788726 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-07 00:59:01.788731 | orchestrator | Saturday 07 March 2026 00:56:53 +0000 (0:00:00.904) 0:10:08.902 ********
2026-03-07 00:59:01.788736 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.788740 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.788745 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.788750 | orchestrator |
2026-03-07 00:59:01.788755 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-07 00:59:01.788760 | orchestrator | Saturday 07 March 2026 00:56:54 +0000 (0:00:00.285) 0:10:09.579 ********
2026-03-07 00:59:01.788765 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.788769 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.788774 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.788779 | orchestrator |
2026-03-07 00:59:01.788784 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-07 00:59:01.788789 | orchestrator | Saturday 07 March 2026 00:56:54 +0000 (0:00:00.285) 0:10:09.864 ********
2026-03-07 00:59:01.788794 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.788799 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.788803 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.788808 | orchestrator |
2026-03-07 00:59:01.788813 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-07 00:59:01.788818 | orchestrator | Saturday 07 March 2026 00:56:55 +0000 (0:00:00.337) 0:10:10.202 ********
2026-03-07 00:59:01.788823 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.788827 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.788832 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.788841 | orchestrator |
2026-03-07 00:59:01.788846 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-07 00:59:01.788868 | orchestrator | Saturday 07 March 2026 00:56:55 +0000 (0:00:00.568) 0:10:10.771 ********
2026-03-07 00:59:01.788874 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.788879 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.788883 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.788888 | orchestrator |
2026-03-07 00:59:01.788893 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-07 00:59:01.788898 | orchestrator | Saturday 07 March 2026 00:56:56 +0000 (0:00:00.731) 0:10:11.502 ********
2026-03-07 00:59:01.788903 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.788907 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.788912 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.788917 | orchestrator |
2026-03-07 00:59:01.788922 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-07 00:59:01.788926 | orchestrator | Saturday 07 March 2026 00:56:57 +0000 (0:00:00.664) 0:10:12.166 ********
2026-03-07 00:59:01.788931 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.788936 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.788941 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.788945 | orchestrator |
2026-03-07 00:59:01.788950 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-07 00:59:01.788955 | orchestrator | Saturday 07 March 2026 00:56:57 +0000 (0:00:00.282) 0:10:12.449 ********
2026-03-07 00:59:01.788960 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.788968 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.788973 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.788978 | orchestrator |
2026-03-07 00:59:01.788982 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-07 00:59:01.788987 | orchestrator | Saturday 07 March 2026 00:56:57 +0000 (0:00:00.500) 0:10:12.949 ********
2026-03-07 00:59:01.788992 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.788997 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.789002 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.789007 | orchestrator |
2026-03-07 00:59:01.789011 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-07 00:59:01.789016 | orchestrator | Saturday 07 March 2026 00:56:58 +0000 (0:00:00.360) 0:10:13.309 ********
2026-03-07 00:59:01.789021 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.789026 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.789031 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.789035 | orchestrator |
2026-03-07 00:59:01.789040 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-07 00:59:01.789045 | orchestrator | Saturday 07 March 2026 00:56:58 +0000 (0:00:00.297) 0:10:13.607 ********
2026-03-07 00:59:01.789050 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.789058 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.789062 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.789067 | orchestrator |
2026-03-07 00:59:01.789072 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-07 00:59:01.789077 | orchestrator | Saturday 07 March 2026 00:56:58 +0000 (0:00:00.314) 0:10:13.921 ********
2026-03-07 00:59:01.789082 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.789087 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.789091 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.789096 | orchestrator |
2026-03-07 00:59:01.789101 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-07 00:59:01.789106 | orchestrator | Saturday 07 March 2026 00:56:59 +0000 (0:00:00.493) 0:10:14.415 ********
2026-03-07 00:59:01.789111 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.789116 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.789121 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.789126 | orchestrator |
2026-03-07 00:59:01.789131 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-07 00:59:01.789139 | orchestrator | Saturday 07 March 2026 00:56:59 +0000 (0:00:00.281) 0:10:14.696 ********
2026-03-07 00:59:01.789144 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.789149 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.789154 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.789159 | orchestrator |
2026-03-07 00:59:01.789164 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-07 00:59:01.789168 | orchestrator | Saturday 07 March 2026 00:56:59 +0000 (0:00:00.295) 0:10:14.991 ********
2026-03-07 00:59:01.789173 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.789178 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.789183 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.789188 | orchestrator |
2026-03-07 00:59:01.789193 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-07 00:59:01.789198 | orchestrator | Saturday 07 March 2026 00:57:00 +0000 (0:00:00.315) 0:10:15.307 ********
2026-03-07 00:59:01.789202 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.789207 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.789212 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.789217 | orchestrator |
2026-03-07 00:59:01.789222 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-07 00:59:01.789226 | orchestrator | Saturday 07 March 2026 00:57:00 +0000 (0:00:00.740) 0:10:16.048 ********
2026-03-07 00:59:01.789231 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.789236 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.789241 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-03-07 00:59:01.789246 | orchestrator |
2026-03-07 00:59:01.789251 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-03-07 00:59:01.789255 | orchestrator | Saturday 07 March 2026 00:57:01 +0000 (0:00:00.412) 0:10:16.460 ********
2026-03-07 00:59:01.789260 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-07 00:59:01.789265 | orchestrator |
2026-03-07 00:59:01.789270 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-03-07 00:59:01.789275 | orchestrator | Saturday 07 March 2026 00:57:03 +0000 (0:00:02.204) 0:10:18.665 ********
2026-03-07 00:59:01.789281 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-03-07 00:59:01.789288 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.789293 | orchestrator |
2026-03-07 00:59:01.789297 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-03-07 00:59:01.789302 | orchestrator | Saturday 07 March 2026 00:57:04 +0000 (0:00:00.510) 0:10:19.176 ********
2026-03-07 00:59:01.789308 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-07 00:59:01.789318 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-07 00:59:01.789323 | orchestrator |
2026-03-07 00:59:01.789328 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-03-07 00:59:01.789333 | orchestrator | Saturday 07 March 2026 00:57:12 +0000 (0:00:08.637) 0:10:27.813 ********
2026-03-07 00:59:01.789341 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-07 00:59:01.789346 | orchestrator |
2026-03-07 00:59:01.789351 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-03-07 00:59:01.789356 | orchestrator | Saturday 07 March 2026 00:57:16 +0000 (0:00:03.961) 0:10:31.774 ********
2026-03-07 00:59:01.789365 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:59:01.789370 | orchestrator |
2026-03-07 00:59:01.789374 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-03-07 00:59:01.789379 | orchestrator | Saturday 07 March 2026 00:57:17 +0000 (0:00:00.534) 0:10:32.309 ********
2026-03-07 00:59:01.789384 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-07 00:59:01.789389 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-07 00:59:01.789393 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-07 00:59:01.789398 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-03-07 00:59:01.789406 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-03-07 00:59:01.789411 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-03-07 00:59:01.789416 | orchestrator |
2026-03-07 00:59:01.789420 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-07 00:59:01.789425 | orchestrator | Saturday 07 March 2026 00:57:18 +0000 (0:00:01.254) 0:10:33.564 ********
2026-03-07 00:59:01.789430 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 00:59:01.789435 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-07 00:59:01.789440 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-07 00:59:01.789445 | orchestrator |
2026-03-07 00:59:01.789449 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-07 00:59:01.789454 | orchestrator | Saturday 07 March 2026 00:57:21 +0000 (0:00:02.753) 0:10:36.317 ********
2026-03-07 00:59:01.789459 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-07 00:59:01.789464 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-07 00:59:01.789469 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:59:01.789474 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-07 00:59:01.789479 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-07 00:59:01.789483 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-07 00:59:01.789488 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:59:01.789493 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-07 00:59:01.789498 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:59:01.789503 | orchestrator |
2026-03-07 00:59:01.789508 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-03-07 00:59:01.789513 | orchestrator | Saturday 07 March 2026 00:57:22 +0000 (0:00:01.444) 0:10:37.762 ********
2026-03-07 00:59:01.789517 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:59:01.789522 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:59:01.789527 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:59:01.789532 | orchestrator |
2026-03-07 00:59:01.789537 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-03-07 00:59:01.789541 | orchestrator | Saturday 07 March 2026 00:57:25 +0000 (0:00:02.625) 0:10:40.387 ********
2026-03-07 00:59:01.789546 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.789551 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.789556 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.789561 | orchestrator |
2026-03-07 00:59:01.789566 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-03-07 00:59:01.789570 | orchestrator | Saturday 07 March 2026 00:57:25 +0000 (0:00:00.539) 0:10:40.927 ********
2026-03-07 00:59:01.789575 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-5, testbed-node-4, testbed-node-3
2026-03-07 00:59:01.789580 | orchestrator |
2026-03-07 00:59:01.789585 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-03-07 00:59:01.789590 | orchestrator | Saturday 07 March 2026 00:57:26 +0000 (0:00:01.023) 0:10:41.950 ********
2026-03-07 00:59:01.789598 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:59:01.789603 | orchestrator |
2026-03-07 00:59:01.789608 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-03-07 00:59:01.789613 | orchestrator | Saturday 07 March 2026 00:57:27 +0000 (0:00:00.605) 0:10:42.556 ********
2026-03-07 00:59:01.789618 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:59:01.789623 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:59:01.789627 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:59:01.789632 | orchestrator |
2026-03-07 00:59:01.789637 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-03-07 00:59:01.789642 | orchestrator | Saturday 07 March 2026 00:57:29 +0000 (0:00:01.554) 0:10:44.110 ********
2026-03-07 00:59:01.789647 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:59:01.789651 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:59:01.789656 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:59:01.789661 | orchestrator |
2026-03-07 00:59:01.789666 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-03-07 00:59:01.789671 | orchestrator | Saturday 07 March 2026 00:57:31 +0000 (0:00:01.970) 0:10:46.081 ********
2026-03-07 00:59:01.789675 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:59:01.789680 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:59:01.789685 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:59:01.789690 | orchestrator |
2026-03-07 00:59:01.789695 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-03-07 00:59:01.789700 | orchestrator | Saturday 07 March 2026 00:57:33 +0000 (0:00:02.143) 0:10:48.224 ********
2026-03-07 00:59:01.789704 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:59:01.789712 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:59:01.789716 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:59:01.789721 | orchestrator |
2026-03-07 00:59:01.789726 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-03-07 00:59:01.789731 | orchestrator | Saturday 07 March 2026 00:57:35 +0000 (0:00:02.249) 0:10:50.474 ********
2026-03-07 00:59:01.789736 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.789741 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.789745 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.789750 | orchestrator |
2026-03-07 00:59:01.789755 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-07 00:59:01.789760 | orchestrator | Saturday 07 March 2026 00:57:37 +0000 (0:00:01.949) 0:10:52.423 ********
2026-03-07 00:59:01.789765 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:59:01.789770 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:59:01.789774 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:59:01.789779 | orchestrator |
2026-03-07 00:59:01.789784 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-07 00:59:01.789789 | orchestrator | Saturday 07 March 2026 00:57:38 +0000 (0:00:00.900) 0:10:53.323 ********
2026-03-07 00:59:01.789796 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:59:01.789801 | orchestrator |
2026-03-07 00:59:01.789806 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-07 00:59:01.789811 | orchestrator | Saturday 07 March 2026 00:57:39 +0000 (0:00:00.839) 0:10:54.163 ********
2026-03-07 00:59:01.789816 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.789821 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.789825 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.789830 | orchestrator |
2026-03-07 00:59:01.789835 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-07 00:59:01.789840 | orchestrator | Saturday 07 March 2026 00:57:39 +0000 (0:00:00.359) 0:10:54.523 ********
2026-03-07 00:59:01.789845 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:59:01.789875 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:59:01.789880 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:59:01.789888 | orchestrator |
2026-03-07 00:59:01.789893 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-07 00:59:01.789898 | orchestrator | Saturday 07 March 2026 00:57:40 +0000 (0:00:01.095) 0:10:55.618 ********
2026-03-07 00:59:01.789903 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-07 00:59:01.789908 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-07 00:59:01.789913 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-07 00:59:01.789918 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.789922 | orchestrator |
2026-03-07 00:59:01.789927 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-07 00:59:01.789932 | orchestrator | Saturday 07 March 2026 00:57:41 +0000 (0:00:01.217) 0:10:56.835 ********
2026-03-07 00:59:01.789937 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.789942 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.789947 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.789951 | orchestrator |
2026-03-07 00:59:01.789956 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-07 00:59:01.789961 | orchestrator |
2026-03-07 00:59:01.789966 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-07 00:59:01.789971 | orchestrator | Saturday 07 March 2026 00:57:42 +0000 (0:00:00.717) 0:10:57.552 ********
2026-03-07 00:59:01.789976 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:59:01.789981 | orchestrator |
2026-03-07 00:59:01.789985 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-07 00:59:01.789990 | orchestrator | Saturday 07 March 2026 00:57:42 +0000 (0:00:00.463) 0:10:58.016 ********
2026-03-07 00:59:01.789995 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:59:01.790000 | orchestrator |
2026-03-07 00:59:01.790005 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-07 00:59:01.790010 | orchestrator | Saturday 07 March 2026 00:57:43 +0000 (0:00:00.826) 0:10:58.843 ********
2026-03-07 00:59:01.790033 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.790038 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.790043 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.790047 | orchestrator |
2026-03-07 00:59:01.790052 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-07 00:59:01.790057 | orchestrator | Saturday 07 March 2026 00:57:44 +0000 (0:00:00.348) 0:10:59.191 ********
2026-03-07 00:59:01.790062 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.790067 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.790072 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.790076 | orchestrator |
2026-03-07 00:59:01.790081 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-07 00:59:01.790086 | orchestrator | Saturday 07 March 2026 00:57:44 +0000 (0:00:00.740) 0:10:59.932 ********
2026-03-07 00:59:01.790091 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.790095 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.790100 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.790105 | orchestrator |
2026-03-07 00:59:01.790110 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-07 00:59:01.790114 | orchestrator | Saturday 07 March 2026 00:57:46 +0000 (0:00:01.202) 0:11:01.135 ********
2026-03-07 00:59:01.790119 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.790124 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.790129 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.790133 | orchestrator |
2026-03-07 00:59:01.790138 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-07 00:59:01.790143 | orchestrator | Saturday 07 March 2026 00:57:46 +0000 (0:00:00.799) 0:11:01.935 ********
2026-03-07 00:59:01.790148 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.790153 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.790161 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.790166 | orchestrator |
2026-03-07 00:59:01.790174 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-07 00:59:01.790179 | orchestrator | Saturday 07 March 2026 00:57:47 +0000 (0:00:00.380) 0:11:02.315 ********
2026-03-07 00:59:01.790184 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.790189 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.790194 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.790198 | orchestrator |
2026-03-07 00:59:01.790203 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-07 00:59:01.790208 | orchestrator | Saturday 07 March 2026 00:57:47 +0000 (0:00:00.386) 0:11:02.702 ********
2026-03-07 00:59:01.790213 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.790218 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.790222 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.790227 | orchestrator |
2026-03-07 00:59:01.790232 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-07 00:59:01.790237 | orchestrator | Saturday 07 March 2026 00:57:48 +0000 (0:00:00.755) 0:11:03.457 ********
2026-03-07 00:59:01.790242 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.790246 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.790251 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.790256 | orchestrator |
2026-03-07 00:59:01.790264 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-07 00:59:01.790269 | orchestrator | Saturday 07 March 2026 00:57:49 +0000 (0:00:00.803) 0:11:04.261 ********
2026-03-07 00:59:01.790274 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.790278 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.790283 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.790288 | orchestrator |
2026-03-07 00:59:01.790293 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-07 00:59:01.790297 | orchestrator | Saturday 07 March 2026 00:57:49 +0000 (0:00:00.774) 0:11:05.035 ********
2026-03-07 00:59:01.790302 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.790307 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.790312 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.790317 | orchestrator |
2026-03-07 00:59:01.790321 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-07 00:59:01.790326 | orchestrator | Saturday 07 March 2026 00:57:50 +0000 (0:00:00.335) 0:11:05.371 ********
2026-03-07 00:59:01.790331 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.790336 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.790341 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.790345 | orchestrator |
2026-03-07 00:59:01.790350 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-07 00:59:01.790355 | orchestrator | Saturday 07 March 2026 00:57:51 +0000 (0:00:00.726) 0:11:06.098 ********
2026-03-07 00:59:01.790360 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.790364 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.790369 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.790374 | orchestrator |
2026-03-07 00:59:01.790379 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-07 00:59:01.790384 | orchestrator | Saturday 07 March 2026 00:57:51 +0000 (0:00:00.484) 0:11:06.582 ********
2026-03-07 00:59:01.790388 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.790393 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.790398 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.790403 | orchestrator |
2026-03-07 00:59:01.790407 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-07 00:59:01.790412 | orchestrator | Saturday 07 March 2026 00:57:51 +0000 (0:00:00.418) 0:11:07.001 ********
2026-03-07 00:59:01.790417 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.790422 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.790426 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.790431 | orchestrator |
2026-03-07 00:59:01.790440 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-07 00:59:01.790445 | orchestrator | Saturday 07 March 2026 00:57:52 +0000 (0:00:00.381) 0:11:07.383 ********
2026-03-07 00:59:01.790450 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.790455 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.790459 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.790464 | orchestrator |
2026-03-07 00:59:01.790469 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-07 00:59:01.790474 | orchestrator | Saturday 07 March 2026 00:57:53 +0000 (0:00:00.767) 0:11:08.150 ********
2026-03-07 00:59:01.790479 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.790483 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.790488 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.790493 | orchestrator |
2026-03-07 00:59:01.790498 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-07 00:59:01.790503 | orchestrator | Saturday 07 March 2026 00:57:53 +0000 (0:00:00.330) 0:11:08.481 ********
2026-03-07 00:59:01.790507 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.790512 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.790517 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.790522 | orchestrator |
2026-03-07 00:59:01.790526 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-07 00:59:01.790531 | orchestrator | Saturday 07 March 2026 00:57:53 +0000 (0:00:00.360) 0:11:08.842 ********
2026-03-07 00:59:01.790536 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.790541 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.790546 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.790550 | orchestrator |
2026-03-07 00:59:01.790555 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-07 00:59:01.790560 | orchestrator | Saturday 07 March 2026 00:57:54 +0000 (0:00:00.375) 0:11:09.218 ********
2026-03-07 00:59:01.790565 | orchestrator | ok: [testbed-node-3]
2026-03-07 00:59:01.790569 | orchestrator | ok: [testbed-node-4]
2026-03-07 00:59:01.790574 | orchestrator | ok: [testbed-node-5]
2026-03-07 00:59:01.790579 | orchestrator |
2026-03-07 00:59:01.790584 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-03-07 00:59:01.790588 | orchestrator | Saturday 07 March 2026 00:57:55 +0000 (0:00:00.894) 0:11:10.112 ********
2026-03-07 00:59:01.790593 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:59:01.790598 | orchestrator |
2026-03-07 00:59:01.790603 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-07 00:59:01.790610 | orchestrator | Saturday 07 March 2026 00:57:55 +0000 (0:00:00.622) 0:11:10.735 ********
2026-03-07 00:59:01.790615 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 00:59:01.790620 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-07 00:59:01.790625 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-07 00:59:01.790630 | orchestrator |
2026-03-07 00:59:01.790634 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-07 00:59:01.790639 | orchestrator | Saturday 07 March 2026 00:57:58 +0000 (0:00:02.385) 0:11:13.120 ********
2026-03-07 00:59:01.790644 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-07 00:59:01.790649 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-07 00:59:01.790654 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:59:01.790658 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-07 00:59:01.790663 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-07 00:59:01.790668 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:59:01.790673 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-07 00:59:01.790680 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-07 00:59:01.790685 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:59:01.790690 | orchestrator |
2026-03-07 00:59:01.790698 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-03-07 00:59:01.790703 | orchestrator | Saturday 07 March 2026 00:57:59 +0000 (0:00:01.560) 0:11:14.681 ********
2026-03-07 00:59:01.790708 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.790712 | orchestrator | skipping: [testbed-node-4]
2026-03-07 00:59:01.790717 | orchestrator | skipping: [testbed-node-5]
2026-03-07 00:59:01.790722 | orchestrator |
2026-03-07 00:59:01.790727 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-03-07 00:59:01.790732 | orchestrator | Saturday 07 March 2026 00:57:59 +0000 (0:00:00.346) 0:11:15.027 ********
2026-03-07 00:59:01.790736 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 00:59:01.790741 | orchestrator |
2026-03-07 00:59:01.790746 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-03-07 00:59:01.790751 | orchestrator | Saturday 07 March 2026 00:58:00 +0000 (0:00:00.604) 0:11:15.632 ********
2026-03-07 00:59:01.790756 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-07 00:59:01.790761 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-07 00:59:01.790766 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-07 00:59:01.790770 | orchestrator |
2026-03-07 00:59:01.790775 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-03-07 00:59:01.790780 | orchestrator | Saturday 07 March 2026 00:58:02 +0000 (0:00:01.599) 0:11:17.232 ********
2026-03-07 00:59:01.790785 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 00:59:01.790790 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-07 00:59:01.790795 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 00:59:01.790799 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-07 00:59:01.790804 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 00:59:01.790809 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-07 00:59:01.790814 | orchestrator |
2026-03-07 00:59:01.790819 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-07 00:59:01.790824 | orchestrator | Saturday 07 March 2026 00:58:07 +0000 (0:00:05.219) 0:11:22.451 ********
2026-03-07 00:59:01.790828 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 00:59:01.790833 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-07 00:59:01.790838 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 00:59:01.790843 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-07 00:59:01.790847 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 00:59:01.790881 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-07 00:59:01.790886 | orchestrator |
2026-03-07 00:59:01.790890 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-07 00:59:01.790895 | orchestrator | Saturday 07 March 2026 00:58:09 +0000 (0:00:02.418) 0:11:24.869 ********
2026-03-07 00:59:01.790900 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-07 00:59:01.790905 | orchestrator | changed: [testbed-node-3]
2026-03-07 00:59:01.790910 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-07 00:59:01.790919 | orchestrator | changed: [testbed-node-4]
2026-03-07 00:59:01.790924 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-07 00:59:01.790929 | orchestrator | changed: [testbed-node-5]
2026-03-07 00:59:01.790933 | orchestrator |
2026-03-07 00:59:01.790938 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-03-07 00:59:01.790946 | orchestrator | Saturday 07 March 2026 00:58:11 +0000 (0:00:01.327) 0:11:26.196 ********
2026-03-07 00:59:01.790951 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-03-07 00:59:01.790956 | orchestrator |
2026-03-07 00:59:01.790961 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-03-07 00:59:01.790966 | orchestrator | Saturday 07 March 2026 00:58:11 +0000 (0:00:00.230) 0:11:26.426 ********
2026-03-07 00:59:01.790971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-07 00:59:01.790976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-07 00:59:01.790981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-07 00:59:01.790989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-07 00:59:01.790994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-07 00:59:01.790999 | orchestrator | skipping: [testbed-node-3]
2026-03-07 00:59:01.791004 | orchestrator |
2026-03-07 00:59:01.791009 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-03-07 00:59:01.791014 | orchestrator | Saturday 07 March 2026 00:58:12 +0000 (0:00:01.331) 0:11:27.758 ********
2026-03-07 00:59:01.791019 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-07 00:59:01.791024 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-07 00:59:01.791028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-07 00:59:01.791033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-07 00:59:01.791038 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-07 00:59:01.791043 | orchestrator | skipping: [testbed-node-3]
2026-03-07
00:59:01.791048 | orchestrator | 2026-03-07 00:59:01.791053 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-07 00:59:01.791058 | orchestrator | Saturday 07 March 2026 00:58:13 +0000 (0:00:00.688) 0:11:28.446 ******** 2026-03-07 00:59:01.791063 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-07 00:59:01.791067 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-07 00:59:01.791073 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-07 00:59:01.791077 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-07 00:59:01.791082 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-07 00:59:01.791090 | orchestrator | 2026-03-07 00:59:01.791095 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-07 00:59:01.791100 | orchestrator | Saturday 07 March 2026 00:58:46 +0000 (0:00:32.694) 0:12:01.141 ******** 2026-03-07 00:59:01.791105 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.791110 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.791115 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.791120 | orchestrator | 2026-03-07 00:59:01.791124 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-07 00:59:01.791129 | orchestrator | 
Saturday 07 March 2026 00:58:46 +0000 (0:00:00.365) 0:12:01.506 ******** 2026-03-07 00:59:01.791134 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.791139 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.791144 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.791149 | orchestrator | 2026-03-07 00:59:01.791153 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-07 00:59:01.791158 | orchestrator | Saturday 07 March 2026 00:58:46 +0000 (0:00:00.348) 0:12:01.855 ******** 2026-03-07 00:59:01.791163 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:59:01.791168 | orchestrator | 2026-03-07 00:59:01.791173 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-07 00:59:01.791178 | orchestrator | Saturday 07 March 2026 00:58:47 +0000 (0:00:01.015) 0:12:02.870 ******** 2026-03-07 00:59:01.791182 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:59:01.791187 | orchestrator | 2026-03-07 00:59:01.791195 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-07 00:59:01.791200 | orchestrator | Saturday 07 March 2026 00:58:48 +0000 (0:00:00.598) 0:12:03.468 ******** 2026-03-07 00:59:01.791205 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:59:01.791210 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:59:01.791215 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:59:01.791219 | orchestrator | 2026-03-07 00:59:01.791224 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-07 00:59:01.791229 | orchestrator | Saturday 07 March 2026 00:58:49 +0000 (0:00:01.419) 0:12:04.887 ******** 2026-03-07 00:59:01.791234 | orchestrator | changed: 
[testbed-node-3] 2026-03-07 00:59:01.791239 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:59:01.791243 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:59:01.791248 | orchestrator | 2026-03-07 00:59:01.791253 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-07 00:59:01.791258 | orchestrator | Saturday 07 March 2026 00:58:51 +0000 (0:00:01.723) 0:12:06.611 ******** 2026-03-07 00:59:01.791263 | orchestrator | changed: [testbed-node-3] 2026-03-07 00:59:01.791267 | orchestrator | changed: [testbed-node-5] 2026-03-07 00:59:01.791272 | orchestrator | changed: [testbed-node-4] 2026-03-07 00:59:01.791277 | orchestrator | 2026-03-07 00:59:01.791285 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-07 00:59:01.791290 | orchestrator | Saturday 07 March 2026 00:58:53 +0000 (0:00:01.967) 0:12:08.578 ******** 2026-03-07 00:59:01.791295 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-07 00:59:01.791300 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-07 00:59:01.791305 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-07 00:59:01.791309 | orchestrator | 2026-03-07 00:59:01.791314 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-07 00:59:01.791319 | orchestrator | Saturday 07 March 2026 00:58:56 +0000 (0:00:02.942) 0:12:11.520 ******** 2026-03-07 00:59:01.791327 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.791332 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.791337 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.791342 | orchestrator 
| 2026-03-07 00:59:01.791347 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-07 00:59:01.791351 | orchestrator | Saturday 07 March 2026 00:58:56 +0000 (0:00:00.372) 0:12:11.892 ******** 2026-03-07 00:59:01.791356 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 00:59:01.791361 | orchestrator | 2026-03-07 00:59:01.791366 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-07 00:59:01.791371 | orchestrator | Saturday 07 March 2026 00:58:57 +0000 (0:00:00.552) 0:12:12.445 ******** 2026-03-07 00:59:01.791376 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.791381 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.791386 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.791394 | orchestrator | 2026-03-07 00:59:01.791398 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-07 00:59:01.791403 | orchestrator | Saturday 07 March 2026 00:58:58 +0000 (0:00:00.682) 0:12:13.128 ******** 2026-03-07 00:59:01.791408 | orchestrator | skipping: [testbed-node-3] 2026-03-07 00:59:01.791413 | orchestrator | skipping: [testbed-node-4] 2026-03-07 00:59:01.791418 | orchestrator | skipping: [testbed-node-5] 2026-03-07 00:59:01.791422 | orchestrator | 2026-03-07 00:59:01.791427 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-07 00:59:01.791432 | orchestrator | Saturday 07 March 2026 00:58:58 +0000 (0:00:00.361) 0:12:13.489 ******** 2026-03-07 00:59:01.791437 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-07 00:59:01.791442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-07 00:59:01.791446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-07 00:59:01.791451 | orchestrator 
| skipping: [testbed-node-3] 2026-03-07 00:59:01.791456 | orchestrator | 2026-03-07 00:59:01.791461 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-07 00:59:01.791465 | orchestrator | Saturday 07 March 2026 00:58:59 +0000 (0:00:00.634) 0:12:14.123 ******** 2026-03-07 00:59:01.791470 | orchestrator | ok: [testbed-node-3] 2026-03-07 00:59:01.791475 | orchestrator | ok: [testbed-node-4] 2026-03-07 00:59:01.791480 | orchestrator | ok: [testbed-node-5] 2026-03-07 00:59:01.791484 | orchestrator | 2026-03-07 00:59:01.791489 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 00:59:01.791494 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-07 00:59:01.791499 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-07 00:59:01.791504 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-07 00:59:01.791509 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-07 00:59:01.791513 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-07 00:59:01.791521 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-07 00:59:01.791526 | orchestrator | 2026-03-07 00:59:01.791531 | orchestrator | 2026-03-07 00:59:01.791536 | orchestrator | 2026-03-07 00:59:01.791541 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:59:01.791546 | orchestrator | Saturday 07 March 2026 00:58:59 +0000 (0:00:00.277) 0:12:14.401 ******** 2026-03-07 00:59:01.791554 | orchestrator | =============================================================================== 
2026-03-07 00:59:01.791559 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 42.06s 2026-03-07 00:59:01.791563 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.04s 2026-03-07 00:59:01.791568 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.69s 2026-03-07 00:59:01.791573 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.76s 2026-03-07 00:59:01.791578 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.01s 2026-03-07 00:59:01.791582 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.75s 2026-03-07 00:59:01.791590 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.76s 2026-03-07 00:59:01.791595 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.92s 2026-03-07 00:59:01.791600 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.76s 2026-03-07 00:59:01.791604 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.64s 2026-03-07 00:59:01.791609 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.97s 2026-03-07 00:59:01.791614 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.71s 2026-03-07 00:59:01.791618 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.41s 2026-03-07 00:59:01.791623 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.22s 2026-03-07 00:59:01.791628 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 4.95s 2026-03-07 00:59:01.791633 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 4.71s 2026-03-07 
00:59:01.791637 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.63s 2026-03-07 00:59:01.791642 | orchestrator | ceph-mon : Generate systemd unit file for mon container ----------------- 4.28s 2026-03-07 00:59:01.791647 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.26s 2026-03-07 00:59:01.791652 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.07s 2026-03-07 00:59:01.791657 | orchestrator | 2026-03-07 00:59:01 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:59:04.821553 | orchestrator | 2026-03-07 00:59:04 | INFO  | Task f3ef2882-100c-4207-b598-42bcc552901a is in state STARTED 2026-03-07 00:59:04.826542 | orchestrator | 2026-03-07 00:59:04 | INFO  | Task a8b4a5a2-b965-43a3-bd0e-1f7d24c29831 is in state STARTED 2026-03-07 00:59:04.828658 | orchestrator | 2026-03-07 00:59:04 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 00:59:04.828834 | orchestrator | 2026-03-07 00:59:04 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:59:07.893432 | orchestrator | 2026-03-07 00:59:07 | INFO  | Task f3ef2882-100c-4207-b598-42bcc552901a is in state STARTED 2026-03-07 00:59:07.894850 | orchestrator | 2026-03-07 00:59:07 | INFO  | Task a8b4a5a2-b965-43a3-bd0e-1f7d24c29831 is in state STARTED 2026-03-07 00:59:07.896372 | orchestrator | 2026-03-07 00:59:07 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 00:59:07.896422 | orchestrator | 2026-03-07 00:59:07 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:59:10.960850 | orchestrator | 2026-03-07 00:59:10 | INFO  | Task f3ef2882-100c-4207-b598-42bcc552901a is in state STARTED 2026-03-07 00:59:10.963016 | orchestrator | 2026-03-07 00:59:10 | INFO  | Task a8b4a5a2-b965-43a3-bd0e-1f7d24c29831 is in state STARTED 2026-03-07 00:59:10.966306 | orchestrator | 2026-03-07 00:59:10 | INFO  | Task 
35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 00:59:10.966988 | orchestrator | 2026-03-07 00:59:10 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:59:47.649757 | orchestrator | 2026-03-07 00:59:47 | INFO  | Task
f3ef2882-100c-4207-b598-42bcc552901a is in state STARTED 2026-03-07 00:59:47.651858 | orchestrator | 2026-03-07 00:59:47 | INFO  | Task a8b4a5a2-b965-43a3-bd0e-1f7d24c29831 is in state STARTED 2026-03-07 00:59:47.653558 | orchestrator | 2026-03-07 00:59:47 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 00:59:47.653606 | orchestrator | 2026-03-07 00:59:47 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:59:50.701671 | orchestrator | 2026-03-07 00:59:50 | INFO  | Task f3ef2882-100c-4207-b598-42bcc552901a is in state STARTED 2026-03-07 00:59:50.704671 | orchestrator | 2026-03-07 00:59:50 | INFO  | Task a8b4a5a2-b965-43a3-bd0e-1f7d24c29831 is in state SUCCESS 2026-03-07 00:59:50.707044 | orchestrator | 2026-03-07 00:59:50.707113 | orchestrator | 2026-03-07 00:59:50.707122 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 00:59:50.707129 | orchestrator | 2026-03-07 00:59:50.707135 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 00:59:50.707141 | orchestrator | Saturday 07 March 2026 00:57:03 +0000 (0:00:00.246) 0:00:00.246 ******** 2026-03-07 00:59:50.707146 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:50.707153 | orchestrator | ok: [testbed-node-1] 2026-03-07 00:59:50.707158 | orchestrator | ok: [testbed-node-2] 2026-03-07 00:59:50.707164 | orchestrator | 2026-03-07 00:59:50.707169 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 00:59:50.707175 | orchestrator | Saturday 07 March 2026 00:57:03 +0000 (0:00:00.341) 0:00:00.588 ******** 2026-03-07 00:59:50.707181 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-07 00:59:50.707187 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-07 00:59:50.707212 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-07 
00:59:50.707218 | orchestrator | 2026-03-07 00:59:50.707224 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-07 00:59:50.707229 | orchestrator | 2026-03-07 00:59:50.707234 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-07 00:59:50.707239 | orchestrator | Saturday 07 March 2026 00:57:04 +0000 (0:00:00.559) 0:00:01.148 ******** 2026-03-07 00:59:50.707244 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:59:50.707250 | orchestrator | 2026-03-07 00:59:50.707255 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-07 00:59:50.707260 | orchestrator | Saturday 07 March 2026 00:57:05 +0000 (0:00:00.538) 0:00:01.687 ******** 2026-03-07 00:59:50.707266 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-07 00:59:50.707271 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-07 00:59:50.707276 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-07 00:59:50.707281 | orchestrator | 2026-03-07 00:59:50.707286 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-07 00:59:50.707292 | orchestrator | Saturday 07 March 2026 00:57:05 +0000 (0:00:00.744) 0:00:02.432 ******** 2026-03-07 00:59:50.707299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:50.707308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:50.707334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:50.707347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:50.707355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:50.707361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:50.707367 | orchestrator | 2026-03-07 00:59:50.707373 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-07 00:59:50.707379 | orchestrator | Saturday 07 March 2026 00:57:07 +0000 (0:00:02.013) 0:00:04.445 ******** 2026-03-07 00:59:50.707384 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 
00:59:50.707389 | orchestrator | 2026-03-07 00:59:50.707395 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-07 00:59:50.707400 | orchestrator | Saturday 07 March 2026 00:57:08 +0000 (0:00:00.564) 0:00:05.009 ******** 2026-03-07 00:59:50.707415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:50.707533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:50.707543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:50.707549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:50.707564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:50.707575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:50.707580 | orchestrator | 2026-03-07 00:59:50.707586 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-07 00:59:50.707591 | orchestrator | Saturday 07 March 2026 00:57:11 +0000 (0:00:02.659) 0:00:07.669 ******** 2026-03-07 00:59:50.707597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-07 00:59:50.707602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-07 00:59:50.707608 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:50.707626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-07 00:59:50.707632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-07 00:59:50.707637 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:50.707643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-07 00:59:50.707649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-07 00:59:50.707654 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:50.707667 | orchestrator | 2026-03-07 00:59:50.707673 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-07 00:59:50.707678 | orchestrator | Saturday 07 March 2026 00:57:11 +0000 (0:00:00.890) 0:00:08.560 ******** 2026-03-07 00:59:50.707690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-07 00:59:50.707700 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-07 00:59:50.707708 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:50.707717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}})  2026-03-07 00:59:50.707726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-07 00:59:50.707757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}})  2026-03-07 00:59:50.707767 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:50.707775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-07 00:59:50.707784 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:50.707792 | orchestrator | 2026-03-07 00:59:50.707800 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-07 00:59:50.707809 | orchestrator | Saturday 07 March 2026 00:57:12 +0000 (0:00:00.827) 0:00:09.388 ******** 2026-03-07 00:59:50.707817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:50.707826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:50.707841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:50.707861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:50.707871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:50.707901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:50.707917 | orchestrator | 2026-03-07 00:59:50.707927 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-07 00:59:50.707933 | orchestrator | Saturday 07 March 2026 00:57:15 +0000 (0:00:02.457) 0:00:11.845 ******** 2026-03-07 00:59:50.707938 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:50.707944 | orchestrator | changed: [testbed-node-1] 2026-03-07 
00:59:50.707949 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:59:50.707954 | orchestrator | 2026-03-07 00:59:50.707959 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-07 00:59:50.707964 | orchestrator | Saturday 07 March 2026 00:57:17 +0000 (0:00:02.712) 0:00:14.558 ******** 2026-03-07 00:59:50.707969 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:50.707974 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:59:50.707980 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:59:50.707985 | orchestrator | 2026-03-07 00:59:50.707990 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-07 00:59:50.707995 | orchestrator | Saturday 07 March 2026 00:57:19 +0000 (0:00:01.939) 0:00:16.498 ******** 2026-03-07 00:59:50.708011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:50.708018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:50.708023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-07 00:59:50.708029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:50.708046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:50.708053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-07 00:59:50.708059 | orchestrator | 2026-03-07 00:59:50.708064 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-07 00:59:50.708069 | orchestrator | Saturday 07 March 2026 00:57:22 +0000 (0:00:02.429) 0:00:18.928 ******** 2026-03-07 00:59:50.708074 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:50.708080 | orchestrator | skipping: [testbed-node-1] 2026-03-07 00:59:50.708085 | orchestrator | skipping: [testbed-node-2] 2026-03-07 00:59:50.708090 | orchestrator | 2026-03-07 00:59:50.708096 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-07 00:59:50.708102 | orchestrator | Saturday 07 March 2026 00:57:22 +0000 (0:00:00.294) 0:00:19.222 ******** 2026-03-07 00:59:50.708107 | orchestrator | 2026-03-07 00:59:50.708112 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-07 00:59:50.708124 | orchestrator | Saturday 07 March 2026 00:57:22 +0000 (0:00:00.068) 0:00:19.291 ******** 2026-03-07 00:59:50.708130 | orchestrator | 
2026-03-07 00:59:50.708135 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-07 00:59:50.708140 | orchestrator | Saturday 07 March 2026 00:57:22 +0000 (0:00:00.073) 0:00:19.365 ******** 2026-03-07 00:59:50.708145 | orchestrator | 2026-03-07 00:59:50.708150 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-07 00:59:50.708156 | orchestrator | Saturday 07 March 2026 00:57:22 +0000 (0:00:00.066) 0:00:19.431 ******** 2026-03-07 00:59:50.708162 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:50.708168 | orchestrator | 2026-03-07 00:59:50.708174 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-07 00:59:50.708180 | orchestrator | Saturday 07 March 2026 00:57:23 +0000 (0:00:00.579) 0:00:20.010 ******** 2026-03-07 00:59:50.708186 | orchestrator | skipping: [testbed-node-0] 2026-03-07 00:59:50.708192 | orchestrator | 2026-03-07 00:59:50.708200 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-07 00:59:50.708209 | orchestrator | Saturday 07 March 2026 00:57:23 +0000 (0:00:00.210) 0:00:20.221 ******** 2026-03-07 00:59:50.708217 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:50.708231 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:59:50.708244 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:59:50.708252 | orchestrator | 2026-03-07 00:59:50.708260 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-07 00:59:50.708268 | orchestrator | Saturday 07 March 2026 00:58:19 +0000 (0:00:56.096) 0:01:16.317 ******** 2026-03-07 00:59:50.708276 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:50.708284 | orchestrator | changed: [testbed-node-2] 2026-03-07 00:59:50.708293 | orchestrator | changed: [testbed-node-1] 2026-03-07 00:59:50.708301 | orchestrator | 
2026-03-07 00:59:50.708308 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-07 00:59:50.708316 | orchestrator | Saturday 07 March 2026 00:59:36 +0000 (0:01:17.118) 0:02:33.435 ******** 2026-03-07 00:59:50.708325 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 00:59:50.708334 | orchestrator | 2026-03-07 00:59:50.708343 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-07 00:59:50.708352 | orchestrator | Saturday 07 March 2026 00:59:37 +0000 (0:00:00.835) 0:02:34.271 ******** 2026-03-07 00:59:50.708361 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:50.708372 | orchestrator | 2026-03-07 00:59:50.708380 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-07 00:59:50.708389 | orchestrator | Saturday 07 March 2026 00:59:40 +0000 (0:00:02.591) 0:02:36.862 ******** 2026-03-07 00:59:50.708397 | orchestrator | ok: [testbed-node-0] 2026-03-07 00:59:50.708402 | orchestrator | 2026-03-07 00:59:50.708407 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-07 00:59:50.708413 | orchestrator | Saturday 07 March 2026 00:59:42 +0000 (0:00:02.413) 0:02:39.276 ******** 2026-03-07 00:59:50.708418 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:50.708423 | orchestrator | 2026-03-07 00:59:50.708429 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-07 00:59:50.708434 | orchestrator | Saturday 07 March 2026 00:59:45 +0000 (0:00:02.855) 0:02:42.132 ******** 2026-03-07 00:59:50.708445 | orchestrator | changed: [testbed-node-0] 2026-03-07 00:59:50.708450 | orchestrator | 2026-03-07 00:59:50.708462 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 
00:59:50.708471 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-07 00:59:50.708481 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-07 00:59:50.708496 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-07 00:59:50.708503 | orchestrator | 2026-03-07 00:59:50.708511 | orchestrator | 2026-03-07 00:59:50.708518 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 00:59:50.708526 | orchestrator | Saturday 07 March 2026 00:59:48 +0000 (0:00:02.631) 0:02:44.763 ******** 2026-03-07 00:59:50.708533 | orchestrator | =============================================================================== 2026-03-07 00:59:50.708541 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 77.12s 2026-03-07 00:59:50.708549 | orchestrator | opensearch : Restart opensearch container ------------------------------ 56.10s 2026-03-07 00:59:50.708556 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.86s 2026-03-07 00:59:50.708564 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.71s 2026-03-07 00:59:50.708572 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.66s 2026-03-07 00:59:50.708580 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.63s 2026-03-07 00:59:50.708593 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.59s 2026-03-07 00:59:50.708603 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.46s 2026-03-07 00:59:50.708610 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.43s 2026-03-07 00:59:50.708618 | 
orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.41s 2026-03-07 00:59:50.708627 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.01s 2026-03-07 00:59:50.708635 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.94s 2026-03-07 00:59:50.708642 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.89s 2026-03-07 00:59:50.708651 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.84s 2026-03-07 00:59:50.708659 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.83s 2026-03-07 00:59:50.708666 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.74s 2026-03-07 00:59:50.708674 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.58s 2026-03-07 00:59:50.708682 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s 2026-03-07 00:59:50.708690 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s 2026-03-07 00:59:50.708698 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2026-03-07 00:59:50.708705 | orchestrator | 2026-03-07 00:59:50 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 00:59:50.708713 | orchestrator | 2026-03-07 00:59:50 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:59:53.769647 | orchestrator | 2026-03-07 00:59:53 | INFO  | Task f3ef2882-100c-4207-b598-42bcc552901a is in state STARTED 2026-03-07 00:59:53.770767 | orchestrator | 2026-03-07 00:59:53 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 00:59:53.771291 | orchestrator | 2026-03-07 00:59:53 | INFO  | Wait 1 second(s) until the next check 2026-03-07 
00:59:56.827082 | orchestrator | 2026-03-07 00:59:56 | INFO  | Task f3ef2882-100c-4207-b598-42bcc552901a is in state STARTED 2026-03-07 00:59:56.828648 | orchestrator | 2026-03-07 00:59:56 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 00:59:56.828712 | orchestrator | 2026-03-07 00:59:56 | INFO  | Wait 1 second(s) until the next check 2026-03-07 00:59:59.877408 | orchestrator | 2026-03-07 00:59:59 | INFO  | Task f3ef2882-100c-4207-b598-42bcc552901a is in state STARTED 2026-03-07 00:59:59.880260 | orchestrator | 2026-03-07 00:59:59 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 00:59:59.880412 | orchestrator | 2026-03-07 00:59:59 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:00:02.940878 | orchestrator | 2026-03-07 01:00:02 | INFO  | Task f3ef2882-100c-4207-b598-42bcc552901a is in state STARTED 2026-03-07 01:00:02.942692 | orchestrator | 2026-03-07 01:00:02 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 01:00:02.942762 | orchestrator | 2026-03-07 01:00:02 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:00:06.001775 | orchestrator | 2026-03-07 01:00:06 | INFO  | Task f3ef2882-100c-4207-b598-42bcc552901a is in state STARTED 2026-03-07 01:00:06.003416 | orchestrator | 2026-03-07 01:00:06 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 01:00:06.003494 | orchestrator | 2026-03-07 01:00:06 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:00:09.058394 | orchestrator | 2026-03-07 01:00:09 | INFO  | Task f3ef2882-100c-4207-b598-42bcc552901a is in state STARTED 2026-03-07 01:00:09.060128 | orchestrator | 2026-03-07 01:00:09 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 01:00:09.060199 | orchestrator | 2026-03-07 01:00:09 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:00:12.113062 | orchestrator | 2026-03-07 01:00:12 | INFO  | Task 
f3ef2882-100c-4207-b598-42bcc552901a is in state STARTED 2026-03-07 01:00:12.115709 | orchestrator | 2026-03-07 01:00:12 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 01:00:12.116093 | orchestrator | 2026-03-07 01:00:12 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:00:15.163814 | orchestrator | 2026-03-07 01:00:15 | INFO  | Task f3ef2882-100c-4207-b598-42bcc552901a is in state STARTED 2026-03-07 01:00:15.163964 | orchestrator | 2026-03-07 01:00:15 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 01:00:15.163976 | orchestrator | 2026-03-07 01:00:15 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:00:18.207842 | orchestrator | 2026-03-07 01:00:18 | INFO  | Task f3ef2882-100c-4207-b598-42bcc552901a is in state STARTED 2026-03-07 01:00:18.208916 | orchestrator | 2026-03-07 01:00:18 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state STARTED 2026-03-07 01:00:18.208948 | orchestrator | 2026-03-07 01:00:18 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:00:21.259805 | orchestrator | 2026-03-07 01:00:21 | INFO  | Task f3ef2882-100c-4207-b598-42bcc552901a is in state STARTED 2026-03-07 01:00:21.262405 | orchestrator | 2026-03-07 01:00:21 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:00:21.264809 | orchestrator | 2026-03-07 01:00:21 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state STARTED 2026-03-07 01:00:21.268467 | orchestrator | 2026-03-07 01:00:21 | INFO  | Task 35547cbd-566b-4576-9dbf-28772f908ebe is in state SUCCESS 2026-03-07 01:00:21.270633 | orchestrator | 2026-03-07 01:00:21.270709 | orchestrator | 2026-03-07 01:00:21.270726 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-07 01:00:21.270739 | orchestrator | 2026-03-07 01:00:21.270750 | orchestrator | TASK [Inform the user about the following task] ******************************** 
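The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" records above come from a simple poll-until-done loop. A minimal sketch of that pattern (function and parameter names are ours; the real OSISM client is not shown in this log):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0):
    """Poll task states until none is still STARTED, mirroring the
    log output above. get_state(task_id) -> state string."""
    pending = set(task_ids)
    while pending:
        for tid in sorted(pending):
            state = get_state(tid)
            print(f"Task {tid} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(tid)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

In the log, two tasks are polled in lockstep until the first flips to SUCCESS, at which point follow-up tasks appear in the same loop.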
2026-03-07 01:00:21.270762 | orchestrator | Saturday 07 March 2026 00:57:03 +0000 (0:00:00.085) 0:00:00.085 ******** 2026-03-07 01:00:21.271120 | orchestrator | ok: [localhost] => { 2026-03-07 01:00:21.271137 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-03-07 01:00:21.271149 | orchestrator | } 2026-03-07 01:00:21.271161 | orchestrator | 2026-03-07 01:00:21.271172 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-07 01:00:21.271213 | orchestrator | Saturday 07 March 2026 00:57:03 +0000 (0:00:00.041) 0:00:00.127 ******** 2026-03-07 01:00:21.271226 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-07 01:00:21.271238 | orchestrator | ...ignoring 2026-03-07 01:00:21.271249 | orchestrator | 2026-03-07 01:00:21.271260 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-07 01:00:21.271271 | orchestrator | Saturday 07 March 2026 00:57:06 +0000 (0:00:02.886) 0:00:03.013 ******** 2026-03-07 01:00:21.271282 | orchestrator | skipping: [localhost] 2026-03-07 01:00:21.271293 | orchestrator | 2026-03-07 01:00:21.271304 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-07 01:00:21.271314 | orchestrator | Saturday 07 March 2026 00:57:06 +0000 (0:00:00.067) 0:00:03.081 ******** 2026-03-07 01:00:21.271325 | orchestrator | ok: [localhost] 2026-03-07 01:00:21.271336 | orchestrator | 2026-03-07 01:00:21.271368 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:00:21.271380 | orchestrator | 2026-03-07 01:00:21.271391 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:00:21.271402 | orchestrator | 
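The "Check MariaDB service" task above is an Ansible `wait_for` probe that looks for the string "MariaDB" in the server handshake on 192.168.16.9:3306; the timeout is expected (and ignored) on a first deployment, which is why `kolla_action_mariadb` then falls through to `kolla_action_ng` instead of `upgrade`. A rough stand-alone equivalent (names are ours, a sketch only):

```python
import socket

def mariadb_reachable(host, port=3306, needle=b"MariaDB", timeout=3.0):
    """Connect to the MariaDB port and look for the version string in
    the initial handshake bytes. Returns False when nothing is
    listening yet - the expected state before the first deploy."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return needle in s.recv(1024)
    except OSError:
        return False
```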
Saturday 07 March 2026 00:57:06 +0000 (0:00:00.201) 0:00:03.282 ******** 2026-03-07 01:00:21.271413 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:00:21.271424 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:00:21.271435 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:00:21.271446 | orchestrator | 2026-03-07 01:00:21.271456 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:00:21.271468 | orchestrator | Saturday 07 March 2026 00:57:07 +0000 (0:00:00.361) 0:00:03.644 ******** 2026-03-07 01:00:21.271478 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-07 01:00:21.271490 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-07 01:00:21.271501 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-07 01:00:21.271512 | orchestrator | 2026-03-07 01:00:21.271523 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-07 01:00:21.271534 | orchestrator | 2026-03-07 01:00:21.271545 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-07 01:00:21.271570 | orchestrator | Saturday 07 March 2026 00:57:07 +0000 (0:00:00.606) 0:00:04.250 ******** 2026-03-07 01:00:21.271582 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-07 01:00:21.271594 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-07 01:00:21.271605 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-07 01:00:21.271616 | orchestrator | 2026-03-07 01:00:21.271627 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-07 01:00:21.271638 | orchestrator | Saturday 07 March 2026 00:57:08 +0000 (0:00:00.422) 0:00:04.672 ******** 2026-03-07 01:00:21.271649 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 
2026-03-07 01:00:21.271660 | orchestrator | 2026-03-07 01:00:21.271671 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-07 01:00:21.271682 | orchestrator | Saturday 07 March 2026 00:57:08 +0000 (0:00:00.596) 0:00:05.269 ******** 2026-03-07 01:00:21.271717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-07 01:00:21.271752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}}) 2026-03-07 01:00:21.271776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-07 01:00:21.271806 | orchestrator | 2026-03-07 01:00:21.271837 | orchestrator | TASK [mariadb : Ensuring 
database backup config directory exists] ************** 2026-03-07 01:00:21.271858 | orchestrator | Saturday 07 March 2026 00:57:11 +0000 (0:00:02.838) 0:00:08.107 ******** 2026-03-07 01:00:21.271876 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:00:21.271984 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:00:21.272000 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:00:21.272014 | orchestrator | 2026-03-07 01:00:21.272025 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-07 01:00:21.272035 | orchestrator | Saturday 07 March 2026 00:57:12 +0000 (0:00:00.551) 0:00:08.659 ******** 2026-03-07 01:00:21.272046 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:00:21.272057 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:00:21.272067 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:00:21.272078 | orchestrator | 2026-03-07 01:00:21.272089 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-07 01:00:21.272099 | orchestrator | Saturday 07 March 2026 00:57:13 +0000 (0:00:01.527) 0:00:10.186 ******** 2026-03-07 01:00:21.272118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': 
{'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-07 01:00:21.272141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-07 01:00:21.272168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-07 01:00:21.272181 | orchestrator | 2026-03-07 01:00:21.272192 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-07 01:00:21.272204 | orchestrator | Saturday 07 March 2026 00:57:17 +0000 (0:00:03.585) 0:00:13.772 ******** 2026-03-07 01:00:21.272215 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:00:21.272226 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:00:21.272237 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:00:21.272249 | orchestrator | 2026-03-07 01:00:21.272260 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-07 01:00:21.272271 | orchestrator | Saturday 07 March 2026 00:57:18 +0000 (0:00:01.130) 0:00:14.903 ******** 2026-03-07 01:00:21.272288 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:00:21.272299 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:00:21.272310 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:00:21.272321 | orchestrator | 2026-03-07 01:00:21.272332 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-07 01:00:21.272343 | orchestrator | 
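The `custom_member_list` entries above put the first Galera node in service and mark the others `backup`, so HAProxy sends writes to a single node at a time. A sketch of how such lines could be generated (helper name is ours; the actual template lives in kolla-ansible):

```python
def haproxy_members(nodes, check_port=3306):
    """Build HAProxy 'server' lines like the custom_member_list in the
    log: first node active, remaining nodes marked 'backup'."""
    lines = []
    for i, (name, addr) in enumerate(nodes):
        suffix = "" if i == 0 else " backup"
        lines.append(
            f" server {name} {addr}:{check_port} check port {check_port}"
            f" inter 2000 rise 2 fall 5{suffix}"
        )
    return lines
```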
Saturday 07 March 2026 00:57:23 +0000 (0:00:04.685) 0:00:19.589 ******** 2026-03-07 01:00:21.272354 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:00:21.272365 | orchestrator | 2026-03-07 01:00:21.272376 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-07 01:00:21.272387 | orchestrator | Saturday 07 March 2026 00:57:23 +0000 (0:00:00.487) 0:00:20.076 ******** 2026-03-07 01:00:21.272408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-07 01:00:21.272421 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:00:21.272445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-07 01:00:21.272465 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:21.272484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-07 01:00:21.272497 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:21.272507 | orchestrator |
2026-03-07 01:00:21.272517 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-07 01:00:21.272527 | orchestrator | Saturday 07 March 2026 00:57:27 +0000 (0:00:03.932) 0:00:24.009 ********
2026-03-07 01:00:21.272542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-07 01:00:21.272559 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:00:21.272575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-07 01:00:21.272587 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:21.272602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-07 01:00:21.272618 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:21.272628 | orchestrator |
2026-03-07 01:00:21.272638 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-03-07 01:00:21.272648 | orchestrator | Saturday 07 March 2026 00:57:31 +0000 (0:00:04.419) 0:00:28.429 ********
2026-03-07 01:00:21.272664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-07 01:00:21.272675 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:00:21.272685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-07 01:00:21.272702 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:21.272745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-07 01:00:21.272757 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:21.272767 | orchestrator |
2026-03-07 01:00:21.272777 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2026-03-07 01:00:21.272787 | orchestrator | Saturday 07 March 2026 00:57:35 +0000 (0:00:03.779) 0:00:32.208 ********
2026-03-07 01:00:21.272805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-07 01:00:21.272828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-07 01:00:21.272847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-07 01:00:21.272864 | orchestrator |
2026-03-07 01:00:21.272874 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-03-07 01:00:21.272884 | orchestrator | Saturday 07 March 2026 00:57:39 +0000 (0:00:03.432) 0:00:35.641 ********
2026-03-07 01:00:21.272913 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:00:21.272924 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:00:21.272934 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:00:21.272944 | orchestrator |
2026-03-07 01:00:21.272953 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-07 01:00:21.272963 | orchestrator | Saturday 07 March 2026 00:57:40 +0000 (0:00:01.005) 0:00:36.647 ********
2026-03-07 01:00:21.272973 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:00:21.272983 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:00:21.272993 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:00:21.273003 | orchestrator |
2026-03-07 01:00:21.273012 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-07 01:00:21.273027 | orchestrator | Saturday 07 March 2026 00:57:40 +0000 (0:00:00.341) 0:00:36.988 ********
2026-03-07 01:00:21.273037 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:00:21.273047 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:00:21.273057 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:00:21.273066 |
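The "Check MariaDB service port liveness" probes that follow are Ansible `wait_for` checks with a search string: a node only passes once the TCP greeting on port 3306 contains `MariaDB`. At this point in the run no container has been started yet, so the ten-second timeouts below are expected and explicitly ignored. A minimal Python sketch of such a banner probe (the helper name and the fake-greeting demo server are illustrative, not part of the kolla-ansible role):

```python
import socket
import threading
import time

def wait_for_banner(host, port, needle, timeout=10.0):
    """Poll a TCP port until its greeting contains `needle` or `timeout` elapses.

    Rough approximation of what Ansible's wait_for with search_regex does,
    which is what produced the 'Timeout when waiting for search string
    MariaDB' messages in this log.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0) as sock:
                sock.settimeout(1.0)
                data = sock.recv(1024)  # MariaDB sends its handshake first
                if needle in data:
                    return True
        except OSError:
            pass  # port not up yet (or connection refused); keep retrying
        time.sleep(0.2)
    return False

# Demo against a throwaway local server that mimics the MariaDB greeting.
def _fake_mariadb(srv):
    conn, _ = srv.accept()
    conn.sendall(b"\x0a5.5.5-10.11.15-MariaDB-log\x00")
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=_fake_mariadb, args=(srv,), daemon=True).start()
print(wait_for_banner("127.0.0.1", port, b"MariaDB"))
```

Once the containers are up, the same probe succeeds in the later "Wait for MariaDB service port liveness" tasks.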
orchestrator |
2026-03-07 01:00:21.273076 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-07 01:00:21.273086 | orchestrator | Saturday 07 March 2026 00:57:40 +0000 (0:00:00.368) 0:00:37.357 ********
2026-03-07 01:00:21.273097 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-03-07 01:00:21.273108 | orchestrator | ...ignoring
2026-03-07 01:00:21.273118 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-03-07 01:00:21.273127 | orchestrator | ...ignoring
2026-03-07 01:00:21.273137 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-03-07 01:00:21.273147 | orchestrator | ...ignoring
2026-03-07 01:00:21.273157 | orchestrator |
2026-03-07 01:00:21.273166 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-07 01:00:21.273176 | orchestrator | Saturday 07 March 2026 00:57:51 +0000 (0:00:10.885) 0:00:48.243 ********
2026-03-07 01:00:21.273186 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:00:21.273196 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:00:21.273205 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:00:21.273215 | orchestrator |
2026-03-07 01:00:21.273225 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-07 01:00:21.273235 | orchestrator | Saturday 07 March 2026 00:57:52 +0000 (0:00:00.531) 0:00:48.775 ********
2026-03-07 01:00:21.273245 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:00:21.273255 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:21.273265 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:21.273275 | orchestrator |
2026-03-07 01:00:21.273285 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-07 01:00:21.273295 | orchestrator | Saturday 07 March 2026 00:57:53 +0000 (0:00:00.793) 0:00:49.568 ********
2026-03-07 01:00:21.273304 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:00:21.273314 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:21.273324 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:21.273334 | orchestrator |
2026-03-07 01:00:21.273344 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-07 01:00:21.273353 | orchestrator | Saturday 07 March 2026 00:57:53 +0000 (0:00:00.452) 0:00:50.020 ********
2026-03-07 01:00:21.273363 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:00:21.273373 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:21.273383 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:21.273399 | orchestrator |
2026-03-07 01:00:21.273408 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-07 01:00:21.273424 | orchestrator | Saturday 07 March 2026 00:57:54 +0000 (0:00:00.498) 0:00:50.519 ********
2026-03-07 01:00:21.273441 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:00:21.273456 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:00:21.273469 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:00:21.273483 | orchestrator |
2026-03-07 01:00:21.273496 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-07 01:00:21.273521 | orchestrator | Saturday 07 March 2026 00:57:54 +0000 (0:00:00.506) 0:00:51.025 ********
2026-03-07 01:00:21.273538 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:00:21.273554 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:21.273569 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:21.273586 | orchestrator |
2026-03-07 01:00:21.273600 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-07 01:00:21.273616 | orchestrator | Saturday 07 March 2026 00:57:55 +0000 (0:00:00.807) 0:00:51.833 ********
2026-03-07 01:00:21.273631 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:21.273647 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:21.273663 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-03-07 01:00:21.273679 | orchestrator |
2026-03-07 01:00:21.273695 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-03-07 01:00:21.273710 | orchestrator | Saturday 07 March 2026 00:57:55 +0000 (0:00:00.402) 0:00:52.236 ********
2026-03-07 01:00:21.273725 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:00:21.273740 | orchestrator |
2026-03-07 01:00:21.273755 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-03-07 01:00:21.273770 | orchestrator | Saturday 07 March 2026 00:58:07 +0000 (0:00:11.575) 0:01:03.812 ********
2026-03-07 01:00:21.273785 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:00:21.273929 | orchestrator |
2026-03-07 01:00:21.273953 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-07 01:00:21.273970 | orchestrator | Saturday 07 March 2026 00:58:07 +0000 (0:00:00.135) 0:01:03.947 ********
2026-03-07 01:00:21.273987 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:00:21.274003 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:21.274121 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:21.274138 | orchestrator |
2026-03-07 01:00:21.274148 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-03-07 01:00:21.274158 | orchestrator | Saturday 07 March 2026 00:58:08 +0000 (0:00:01.182) 0:01:05.129 ********
2026-03-07 01:00:21.274168 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:00:21.274178 | orchestrator |
2026-03-07 01:00:21.274188 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-03-07 01:00:21.274202 | orchestrator | Saturday 07 March 2026 00:58:17 +0000 (0:00:08.767) 0:01:13.897 ********
2026-03-07 01:00:21.274218 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:00:21.274234 | orchestrator |
2026-03-07 01:00:21.274247 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-03-07 01:00:21.274262 | orchestrator | Saturday 07 March 2026 00:58:19 +0000 (0:00:01.786) 0:01:15.683 ********
2026-03-07 01:00:21.274288 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:00:21.274303 | orchestrator |
2026-03-07 01:00:21.274320 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-03-07 01:00:21.274336 | orchestrator | Saturday 07 March 2026 00:58:22 +0000 (0:00:03.676) 0:01:19.360 ********
2026-03-07 01:00:21.274352 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:00:21.274368 | orchestrator |
2026-03-07 01:00:21.274384 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-03-07 01:00:21.274400 | orchestrator | Saturday 07 March 2026 00:58:23 +0000 (0:00:00.177) 0:01:19.537 ********
2026-03-07 01:00:21.274415 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:00:21.274449 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:21.274465 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:21.274482 | orchestrator |
2026-03-07 01:00:21.274495 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-03-07 01:00:21.274505 | orchestrator | Saturday 07 March 2026 00:58:23 +0000 (0:00:00.424) 0:01:19.961 ********
2026-03-07 01:00:21.274514 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:00:21.274524 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-03-07 01:00:21.274534 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:00:21.274544 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:00:21.274553 | orchestrator |
2026-03-07 01:00:21.274563 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-07 01:00:21.274573 | orchestrator | skipping: no hosts matched
2026-03-07 01:00:21.274582 | orchestrator |
2026-03-07 01:00:21.274592 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-07 01:00:21.274601 | orchestrator |
2026-03-07 01:00:21.274611 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-07 01:00:21.274621 | orchestrator | Saturday 07 March 2026 00:58:24 +0000 (0:00:01.178) 0:01:21.140 ********
2026-03-07 01:00:21.274631 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:00:21.274641 | orchestrator |
2026-03-07 01:00:21.274651 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-07 01:00:21.274661 | orchestrator | Saturday 07 March 2026 00:58:44 +0000 (0:00:20.161) 0:01:41.301 ********
2026-03-07 01:00:21.274671 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:00:21.274681 | orchestrator |
2026-03-07 01:00:21.274691 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-07 01:00:21.274701 | orchestrator | Saturday 07 March 2026 00:59:00 +0000 (0:00:15.677) 0:01:56.979 ********
2026-03-07 01:00:21.274710 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:00:21.274720 | orchestrator |
2026-03-07 01:00:21.274729 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-07 01:00:21.274739 | orchestrator |
2026-03-07 01:00:21.274749 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-07 01:00:21.274759 | orchestrator | Saturday 07 March 2026 00:59:03 +0000 (0:00:02.804) 0:01:59.784 ********
2026-03-07 01:00:21.274768 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:00:21.274778 | orchestrator |
2026-03-07 01:00:21.274788 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-07 01:00:21.274814 | orchestrator | Saturday 07 March 2026 00:59:27 +0000 (0:00:24.145) 0:02:23.930 ********
2026-03-07 01:00:21.274824 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:00:21.274834 | orchestrator |
2026-03-07 01:00:21.274843 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-07 01:00:21.274853 | orchestrator | Saturday 07 March 2026 00:59:39 +0000 (0:00:11.619) 0:02:35.549 ********
2026-03-07 01:00:21.274863 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:00:21.274872 | orchestrator |
2026-03-07 01:00:21.274882 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-07 01:00:21.274920 | orchestrator |
2026-03-07 01:00:21.274937 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-07 01:00:21.274947 | orchestrator | Saturday 07 March 2026 00:59:41 +0000 (0:00:02.805) 0:02:38.354 ********
2026-03-07 01:00:21.274956 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:00:21.274966 | orchestrator |
2026-03-07 01:00:21.274976 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-07 01:00:21.274985 | orchestrator | Saturday 07 March 2026 01:00:00 +0000 (0:00:18.180) 0:02:56.535 ********
2026-03-07 01:00:21.274995 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:00:21.275005 | orchestrator |
2026-03-07 01:00:21.275015 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-07 01:00:21.275025 | orchestrator | Saturday 07 March 2026 01:00:00 +0000 (0:00:00.575) 0:02:57.111 ********
2026-03-07 01:00:21.275034 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:00:21.275056 | orchestrator |
2026-03-07 01:00:21.275066 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-07 01:00:21.275076 | orchestrator |
2026-03-07 01:00:21.275086 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-07 01:00:21.275095 | orchestrator | Saturday 07 March 2026 01:00:04 +0000 (0:00:03.438) 0:03:00.549 ********
2026-03-07 01:00:21.275105 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:00:21.275115 | orchestrator |
2026-03-07 01:00:21.275125 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-03-07 01:00:21.275159 | orchestrator | Saturday 07 March 2026 01:00:04 +0000 (0:00:00.681) 0:03:01.231 ********
2026-03-07 01:00:21.275169 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:21.275180 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:21.275189 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:00:21.275199 | orchestrator |
2026-03-07 01:00:21.275209 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-03-07 01:00:21.275218 | orchestrator | Saturday 07 March 2026 01:00:07 +0000 (0:00:02.637) 0:03:03.869 ********
2026-03-07 01:00:21.275228 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:00:21.275238 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:00:21.275248 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:00:21.275258 | orchestrator |
2026-03-07 01:00:21.275268 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-03-07 01:00:21.275277 | orchestrator | Saturday
07 March 2026 01:00:10 +0000 (0:00:02.634) 0:03:06.504 ******** 2026-03-07 01:00:21.275287 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:00:21.275303 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:00:21.275313 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:00:21.275323 | orchestrator | 2026-03-07 01:00:21.275332 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-07 01:00:21.275342 | orchestrator | Saturday 07 March 2026 01:00:12 +0000 (0:00:02.428) 0:03:08.932 ******** 2026-03-07 01:00:21.275351 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:00:21.275361 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:00:21.275370 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:00:21.275450 | orchestrator | 2026-03-07 01:00:21.275471 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-07 01:00:21.275487 | orchestrator | Saturday 07 March 2026 01:00:14 +0000 (0:00:02.522) 0:03:11.455 ******** 2026-03-07 01:00:21.275503 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:00:21.275521 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:00:21.275538 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:00:21.275553 | orchestrator | 2026-03-07 01:00:21.275570 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-07 01:00:21.275580 | orchestrator | Saturday 07 March 2026 01:00:18 +0000 (0:00:03.455) 0:03:14.910 ******** 2026-03-07 01:00:21.275590 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:00:21.275599 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:00:21.275609 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:00:21.275618 | orchestrator | 2026-03-07 01:00:21.275628 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:00:21.275639 | orchestrator | localhost : ok=3  changed=0 
unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-07 01:00:21.275650 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-07 01:00:21.275661 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-07 01:00:21.275671 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-07 01:00:21.275689 | orchestrator | 2026-03-07 01:00:21.275699 | orchestrator | 2026-03-07 01:00:21.275709 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:00:21.275719 | orchestrator | Saturday 07 March 2026 01:00:18 +0000 (0:00:00.282) 0:03:15.192 ******** 2026-03-07 01:00:21.275728 | orchestrator | =============================================================================== 2026-03-07 01:00:21.275738 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 44.31s 2026-03-07 01:00:21.275748 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 27.30s 2026-03-07 01:00:21.275767 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 18.18s 2026-03-07 01:00:21.275777 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 11.58s 2026-03-07 01:00:21.275787 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.89s 2026-03-07 01:00:21.275796 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.77s 2026-03-07 01:00:21.275806 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.61s 2026-03-07 01:00:21.275816 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.69s 2026-03-07 01:00:21.275825 | orchestrator | service-cert-copy : mariadb | Copying over 
backend internal TLS certificate --- 4.42s 2026-03-07 01:00:21.275835 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.93s 2026-03-07 01:00:21.275844 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.78s 2026-03-07 01:00:21.275854 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 3.68s 2026-03-07 01:00:21.275863 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.59s 2026-03-07 01:00:21.275873 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.46s 2026-03-07 01:00:21.275883 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.44s 2026-03-07 01:00:21.275916 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.43s 2026-03-07 01:00:21.275927 | orchestrator | Check MariaDB service --------------------------------------------------- 2.89s 2026-03-07 01:00:21.275937 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.84s 2026-03-07 01:00:21.275947 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.64s 2026-03-07 01:00:21.275956 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.64s 2026-03-07 01:00:21.275966 | orchestrator | 2026-03-07 01:00:21 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:00:24.327976 | orchestrator | 2026-03-07 01:00:24 | INFO  | Task f3ef2882-100c-4207-b598-42bcc552901a is in state STARTED 2026-03-07 01:00:24.328671 | orchestrator | 2026-03-07 01:00:24 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:00:24.331011 | orchestrator | 2026-03-07 01:00:24 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state STARTED 2026-03-07 01:00:24.331049 | orchestrator | 2026-03-07 01:00:24 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:22.309376 | orchestrator | 2026-03-07 01:01:22 | INFO  | Task f3ef2882-100c-4207-b598-42bcc552901a is in state SUCCESS 2026-03-07 01:01:22.312284 | orchestrator | 2026-03-07 01:01:22.312419 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-07 01:01:22.312467 | orchestrator | 2.16.14 2026-03-07 01:01:22.312474 | orchestrator | 2026-03-07 01:01:22.312479 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-07 01:01:22.312485 | orchestrator | 
2026-03-07 01:01:22.312490 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-07 01:01:22.312495 | orchestrator | Saturday 07 March 2026 00:59:05 +0000 (0:00:00.658) 0:00:00.658 ******** 2026-03-07 01:01:22.312500 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 01:01:22.312506 | orchestrator | 2026-03-07 01:01:22.312510 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-07 01:01:22.312515 | orchestrator | Saturday 07 March 2026 00:59:06 +0000 (0:00:00.718) 0:00:01.377 ******** 2026-03-07 01:01:22.312520 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:01:22.312525 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:01:22.312529 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:01:22.312534 | orchestrator | 2026-03-07 01:01:22.312538 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-07 01:01:22.312543 | orchestrator | Saturday 07 March 2026 00:59:06 +0000 (0:00:00.655) 0:00:02.033 ******** 2026-03-07 01:01:22.312547 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:01:22.312552 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:01:22.312556 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:01:22.312560 | orchestrator | 2026-03-07 01:01:22.312565 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-07 01:01:22.312569 | orchestrator | Saturday 07 March 2026 00:59:07 +0000 (0:00:00.345) 0:00:02.378 ******** 2026-03-07 01:01:22.312574 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:01:22.312611 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:01:22.312616 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:01:22.312621 | orchestrator | 2026-03-07 01:01:22.312625 | orchestrator | TASK [ceph-facts : Set_fact container_binary] 
********************************** 2026-03-07 01:01:22.312630 | orchestrator | Saturday 07 March 2026 00:59:07 +0000 (0:00:00.873) 0:00:03.252 ******** 2026-03-07 01:01:22.312634 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:01:22.312639 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:01:22.312643 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:01:22.312648 | orchestrator | 2026-03-07 01:01:22.312653 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-07 01:01:22.312677 | orchestrator | Saturday 07 March 2026 00:59:08 +0000 (0:00:00.362) 0:00:03.615 ******** 2026-03-07 01:01:22.312682 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:01:22.312686 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:01:22.312691 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:01:22.312695 | orchestrator | 2026-03-07 01:01:22.312699 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-07 01:01:22.312704 | orchestrator | Saturday 07 March 2026 00:59:08 +0000 (0:00:00.335) 0:00:03.950 ******** 2026-03-07 01:01:22.312708 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:01:22.312713 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:01:22.312897 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:01:22.312903 | orchestrator | 2026-03-07 01:01:22.312907 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-07 01:01:22.312912 | orchestrator | Saturday 07 March 2026 00:59:08 +0000 (0:00:00.378) 0:00:04.329 ******** 2026-03-07 01:01:22.312917 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:01:22.312922 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:01:22.312946 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:01:22.312950 | orchestrator | 2026-03-07 01:01:22.312955 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-07 
01:01:22.312959 | orchestrator | Saturday 07 March 2026 00:59:09 +0000 (0:00:00.651) 0:00:04.981 ******** 2026-03-07 01:01:22.312964 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:01:22.312968 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:01:22.312973 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:01:22.312977 | orchestrator | 2026-03-07 01:01:22.312982 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-07 01:01:22.312986 | orchestrator | Saturday 07 March 2026 00:59:09 +0000 (0:00:00.348) 0:00:05.329 ******** 2026-03-07 01:01:22.312991 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-07 01:01:22.312995 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-07 01:01:22.313000 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-07 01:01:22.313004 | orchestrator | 2026-03-07 01:01:22.313009 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-07 01:01:22.313014 | orchestrator | Saturday 07 March 2026 00:59:10 +0000 (0:00:00.692) 0:00:06.022 ******** 2026-03-07 01:01:22.313019 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:01:22.313023 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:01:22.313027 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:01:22.313032 | orchestrator | 2026-03-07 01:01:22.313036 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-07 01:01:22.313041 | orchestrator | Saturday 07 March 2026 00:59:11 +0000 (0:00:00.521) 0:00:06.543 ******** 2026-03-07 01:01:22.313045 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-07 01:01:22.313050 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-07 01:01:22.313054 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-07 01:01:22.313059 | orchestrator | 2026-03-07 01:01:22.313063 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-07 01:01:22.313067 | orchestrator | Saturday 07 March 2026 00:59:13 +0000 (0:00:02.280) 0:00:08.823 ******** 2026-03-07 01:01:22.313072 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-07 01:01:22.313077 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-07 01:01:22.313081 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-07 01:01:22.313097 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:01:22.313102 | orchestrator | 2026-03-07 01:01:22.313117 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-07 01:01:22.313122 | orchestrator | Saturday 07 March 2026 00:59:14 +0000 (0:00:00.739) 0:00:09.563 ******** 2026-03-07 01:01:22.313134 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.313141 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.313146 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.313150 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:01:22.313155 | orchestrator | 2026-03-07 
01:01:22.313159 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-07 01:01:22.313164 | orchestrator | Saturday 07 March 2026 00:59:15 +0000 (0:00:00.931) 0:00:10.494 ******** 2026-03-07 01:01:22.313170 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.313176 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.313379 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.313395 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:01:22.313402 | orchestrator | 2026-03-07 01:01:22.313409 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-07 01:01:22.313416 | orchestrator | Saturday 07 March 2026 00:59:15 +0000 (0:00:00.507) 0:00:11.002 ******** 2026-03-07 01:01:22.313425 | orchestrator | 
ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'bc24821e3303', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-07 00:59:11.847351', 'end': '2026-03-07 00:59:11.903918', 'delta': '0:00:00.056567', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bc24821e3303'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-07 01:01:22.313441 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b10551ade19b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-07 00:59:12.642286', 'end': '2026-03-07 00:59:12.689192', 'delta': '0:00:00.046906', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b10551ade19b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-07 01:01:22.313482 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1d78d4264b00', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-07 00:59:13.226139', 'end': '2026-03-07 00:59:13.271947', 'delta': '0:00:00.045808', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1d78d4264b00'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-07 01:01:22.313490 | orchestrator |
2026-03-07 01:01:22.313497 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-07 01:01:22.313504 | orchestrator | Saturday 07 March 2026 00:59:15 +0000 (0:00:00.195) 0:00:11.197 ********
2026-03-07 01:01:22.313511 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:01:22.313518 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:01:22.313524 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:01:22.313531 | orchestrator |
2026-03-07 01:01:22.313538 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-07 01:01:22.313544 | orchestrator | Saturday 07 March 2026 00:59:16 +0000 (0:00:00.480) 0:00:11.678 ********
2026-03-07 01:01:22.313551 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-07 01:01:22.313558 | orchestrator |
2026-03-07 01:01:22.313565 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-07 01:01:22.313572 | orchestrator | Saturday 07 March 2026 00:59:18 +0000 (0:00:01.970) 0:00:13.648 ********
2026-03-07 01:01:22.313579 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.313585 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:22.313593 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:22.313599 | orchestrator |
2026-03-07 01:01:22.313606 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-07 01:01:22.313613 | orchestrator | Saturday 07 March 2026 00:59:18 +0000 (0:00:00.333) 0:00:13.982 ********
2026-03-07 01:01:22.313620 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.313627 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:22.313633 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:22.313640 | orchestrator |
2026-03-07 01:01:22.313676 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-07 01:01:22.313684 | orchestrator | Saturday 07 March 2026 00:59:19 +0000 (0:00:00.451) 0:00:14.433 ********
2026-03-07 01:01:22.313691 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.313697 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:22.313704 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:22.313711 | orchestrator |
2026-03-07 01:01:22.313717 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-07 01:01:22.313724 | orchestrator | Saturday 07 March 2026 00:59:19 +0000 (0:00:00.581) 0:00:15.015 ********
2026-03-07 01:01:22.313731 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:01:22.313738 | orchestrator |
2026-03-07 01:01:22.313756 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-07 01:01:22.313763 | orchestrator | Saturday 07 March 2026 00:59:19 +0000 (0:00:00.131) 0:00:15.147 ********
2026-03-07 01:01:22.313770 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.313777 | orchestrator |
2026-03-07 01:01:22.313783 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-07 01:01:22.313790 | orchestrator | Saturday 07 March 2026 00:59:20 +0000 (0:00:00.272) 0:00:15.419 ********
2026-03-07 01:01:22.313803 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.313810 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:22.313817 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:22.313823 | orchestrator |
2026-03-07 01:01:22.313830 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-07 01:01:22.313837 | orchestrator | Saturday 07 March 2026 00:59:20 +0000 (0:00:00.373) 0:00:15.793 ********
2026-03-07 01:01:22.313844 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.313851 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:22.313857 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:22.313864 | orchestrator |
2026-03-07 01:01:22.313871 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-07 01:01:22.313878 | orchestrator | Saturday 07 March 2026 00:59:20 +0000 (0:00:00.356) 0:00:16.150 ********
2026-03-07 01:01:22.313884 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.313891 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:22.313898 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:22.313904 | orchestrator |
2026-03-07 01:01:22.313911 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-07 01:01:22.313918 | orchestrator | Saturday 07 March 2026 00:59:21 +0000 (0:00:00.576) 0:00:16.726 ********
2026-03-07 01:01:22.313940 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.313946 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:22.313953 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:22.313959 | orchestrator |
2026-03-07 01:01:22.313965 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-07 01:01:22.313971 | orchestrator | Saturday 07 March 2026 00:59:21 +0000 (0:00:00.423) 0:00:17.149 ********
2026-03-07 01:01:22.313977 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.313984 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:22.313990 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:22.313997 | orchestrator |
2026-03-07 01:01:22.314005 | orchestrator |
TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-07 01:01:22.314050 | orchestrator | Saturday 07 March 2026 00:59:22 +0000 (0:00:00.358) 0:00:17.508 ******** 2026-03-07 01:01:22.314060 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:01:22.314074 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:01:22.314081 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:01:22.314126 | orchestrator | 2026-03-07 01:01:22.314133 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-07 01:01:22.314138 | orchestrator | Saturday 07 March 2026 00:59:22 +0000 (0:00:00.378) 0:00:17.886 ******** 2026-03-07 01:01:22.314143 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:01:22.314147 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:01:22.314151 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:01:22.314156 | orchestrator | 2026-03-07 01:01:22.314160 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-07 01:01:22.314164 | orchestrator | Saturday 07 March 2026 00:59:23 +0000 (0:00:00.550) 0:00:18.437 ******** 2026-03-07 01:01:22.314170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3529c73b--8337--5a09--bb85--f9958b3a6115-osd--block--3529c73b--8337--5a09--bb85--f9958b3a6115', 'dm-uuid-LVM-G0E8Zuq5yuVlrHw9a1He7gOIdUDQ5vRvDav2cdc2yUdKDp0kFFHzFFNxbbbIx2cl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--5644fa9a--696a--5a4b--ae2f--cbc58e712aba-osd--block--5644fa9a--696a--5a4b--ae2f--cbc58e712aba', 'dm-uuid-LVM-dhDW2UCAexsGjSiFebxoizRulGlPS4gKsFjbX1boFDhq9isN1VoVpNR4Bh2837W9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--030f8481--3d62--5800--8c17--c22bf68268ab-osd--block--030f8481--3d62--5800--8c17--c22bf68268ab', 'dm-uuid-LVM-ytYYAfTHI2JJN8pIptvTymOsYYxl2nsKzc808rdq6y5Gdjzh4bduZ7BnCE3GrxqB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e', 'scsi-SQEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part1', 'scsi-SQEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part14', 'scsi-SQEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part15', 'scsi-SQEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part16', 'scsi-SQEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:22.314290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8595c920--fb8d--5336--8a83--206e7467f719-osd--block--8595c920--fb8d--5336--8a83--206e7467f719', 'dm-uuid-LVM-MKoDmalCC26sY8T7Ia0Pupmb1laUrtAAMsccR57JvJ9Pcl0lp5RHWRcj8Ify5bAW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3529c73b--8337--5a09--bb85--f9958b3a6115-osd--block--3529c73b--8337--5a09--bb85--f9958b3a6115'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-S02Erf-lc84-aEKo-iaps-RrwA-neru-0Ilncq', 'scsi-0QEMU_QEMU_HARDDISK_d799a894-5671-421e-939f-d4a49d05b62b', 'scsi-SQEMU_QEMU_HARDDISK_d799a894-5671-421e-939f-d4a49d05b62b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:22.314305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5644fa9a--696a--5a4b--ae2f--cbc58e712aba-osd--block--5644fa9a--696a--5a4b--ae2f--cbc58e712aba'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-e6byjw-4raU-qrnL-AWeA-GErv-hIhn-F6rGTE', 'scsi-0QEMU_QEMU_HARDDISK_beed34ce-a5a1-4e0c-b446-348e6964ce68', 'scsi-SQEMU_QEMU_HARDDISK_beed34ce-a5a1-4e0c-b446-348e6964ce68'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:22.314314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99318194-5870-4346-ba42-ca8c5b557f89', 'scsi-SQEMU_QEMU_HARDDISK_99318194-5870-4346-ba42-ca8c5b557f89'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:22.314324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:22.314352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-07 01:01:22.314357 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:01:22.314362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250', 'scsi-SQEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part1', 'scsi-SQEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part14', 'scsi-SQEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part15', 'scsi-SQEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part16', 'scsi-SQEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 
'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:22.314401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--030f8481--3d62--5800--8c17--c22bf68268ab-osd--block--030f8481--3d62--5800--8c17--c22bf68268ab'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tJMgzR-FH3c-VJN8-t3LR-mjCg-cB1e-k3f88q', 'scsi-0QEMU_QEMU_HARDDISK_af4ba259-cb6f-4fcf-8c2a-944dae969065', 'scsi-SQEMU_QEMU_HARDDISK_af4ba259-cb6f-4fcf-8c2a-944dae969065'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:22.314410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6dc70d00--a24c--54e3--88f7--ca23e2f9592d-osd--block--6dc70d00--a24c--54e3--88f7--ca23e2f9592d', 'dm-uuid-LVM-jkgqALCR248QwEh8evGRjlqVGySWdBdbNaJY1aOUgbEjt6zlhDkXD7FZlYZulSsu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8595c920--fb8d--5336--8a83--206e7467f719-osd--block--8595c920--fb8d--5336--8a83--206e7467f719'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vkkAYW-kdqr-YzMM-mBoy-jz1M-mFAH-9eEkCi', 'scsi-0QEMU_QEMU_HARDDISK_3c3014e6-40a5-4340-97e1-b63d744f1dc5', 'scsi-SQEMU_QEMU_HARDDISK_3c3014e6-40a5-4340-97e1-b63d744f1dc5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:22.314419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3960461f--aa79--5447--98f8--9395cd95d2e3-osd--block--3960461f--aa79--5447--98f8--9395cd95d2e3', 'dm-uuid-LVM-Iqc4EXlTAo7kndsl0bo8MAuKJ1GjlGC0u2vyA0SVFnmeD66qOH2yKLG7OUWO7NHS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fdacbb0-1c31-482c-97c6-063a331da0fc', 'scsi-SQEMU_QEMU_HARDDISK_4fdacbb0-1c31-482c-97c6-063a331da0fc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:22.314439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:22.314451 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-07 01:01:22.314455 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:01:22.314460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314470 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-07 01:01:22.314496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d', 'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part1', 'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part14', 'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part15', 'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part16', 'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:22.314504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6dc70d00--a24c--54e3--88f7--ca23e2f9592d-osd--block--6dc70d00--a24c--54e3--88f7--ca23e2f9592d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZaZ5JV-te9Q-ux0A-aq6c-OwVe-IKBo-dM6h9H', 'scsi-0QEMU_QEMU_HARDDISK_fc232e22-cf7b-4f47-aee0-37a45820ed30', 'scsi-SQEMU_QEMU_HARDDISK_fc232e22-cf7b-4f47-aee0-37a45820ed30'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:22.314509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--3960461f--aa79--5447--98f8--9395cd95d2e3-osd--block--3960461f--aa79--5447--98f8--9395cd95d2e3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TpUQJo-P6aT-RbXI-AWtd-Rfbr-me5S-2vqAGd', 'scsi-0QEMU_QEMU_HARDDISK_81cf8acf-ab0c-4c96-8ca2-b696b28e7835', 'scsi-SQEMU_QEMU_HARDDISK_81cf8acf-ab0c-4c96-8ca2-b696b28e7835'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:22.314514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa6ba2e8-2fa1-4496-a2d0-ef7dd6ca5952', 'scsi-SQEMU_QEMU_HARDDISK_fa6ba2e8-2fa1-4496-a2d0-ef7dd6ca5952'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:22.314525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-07 01:01:22.314534 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:01:22.314538 | orchestrator | 2026-03-07 01:01:22.314543 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-07 01:01:22.314547 | orchestrator | Saturday 07 March 2026 00:59:23 +0000 (0:00:00.594) 0:00:19.032 ******** 2026-03-07 01:01:22.314553 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3529c73b--8337--5a09--bb85--f9958b3a6115-osd--block--3529c73b--8337--5a09--bb85--f9958b3a6115', 'dm-uuid-LVM-G0E8Zuq5yuVlrHw9a1He7gOIdUDQ5vRvDav2cdc2yUdKDp0kFFHzFFNxbbbIx2cl'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314559 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5644fa9a--696a--5a4b--ae2f--cbc58e712aba-osd--block--5644fa9a--696a--5a4b--ae2f--cbc58e712aba', 'dm-uuid-LVM-dhDW2UCAexsGjSiFebxoizRulGlPS4gKsFjbX1boFDhq9isN1VoVpNR4Bh2837W9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314563 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314568 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314572 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314588 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314593 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314598 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314602 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314607 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314611 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--030f8481--3d62--5800--8c17--c22bf68268ab-osd--block--030f8481--3d62--5800--8c17--c22bf68268ab', 'dm-uuid-LVM-ytYYAfTHI2JJN8pIptvTymOsYYxl2nsKzc808rdq6y5Gdjzh4bduZ7BnCE3GrxqB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314625 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e', 'scsi-SQEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part1', 'scsi-SQEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part14', 'scsi-SQEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part15', 'scsi-SQEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part16', 'scsi-SQEMU_QEMU_HARDDISK_9e6318f9-11cb-4ed8-b0fb-e89153e65f2e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-07 01:01:22.314634 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8595c920--fb8d--5336--8a83--206e7467f719-osd--block--8595c920--fb8d--5336--8a83--206e7467f719', 'dm-uuid-LVM-MKoDmalCC26sY8T7Ia0Pupmb1laUrtAAMsccR57JvJ9Pcl0lp5RHWRcj8Ify5bAW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314639 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3529c73b--8337--5a09--bb85--f9958b3a6115-osd--block--3529c73b--8337--5a09--bb85--f9958b3a6115'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-S02Erf-lc84-aEKo-iaps-RrwA-neru-0Ilncq', 'scsi-0QEMU_QEMU_HARDDISK_d799a894-5671-421e-939f-d4a49d05b62b', 'scsi-SQEMU_QEMU_HARDDISK_d799a894-5671-421e-939f-d4a49d05b62b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314648 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314656 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5644fa9a--696a--5a4b--ae2f--cbc58e712aba-osd--block--5644fa9a--696a--5a4b--ae2f--cbc58e712aba'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-e6byjw-4raU-qrnL-AWeA-GErv-hIhn-F6rGTE', 'scsi-0QEMU_QEMU_HARDDISK_beed34ce-a5a1-4e0c-b446-348e6964ce68', 'scsi-SQEMU_QEMU_HARDDISK_beed34ce-a5a1-4e0c-b446-348e6964ce68'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314661 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314665 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99318194-5870-4346-ba42-ca8c5b557f89', 'scsi-SQEMU_QEMU_HARDDISK_99318194-5870-4346-ba42-ca8c5b557f89'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314670 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314675 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314691 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314696 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314701 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314705 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314710 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314721 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250', 'scsi-SQEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part1', 'scsi-SQEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part14', 'scsi-SQEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part15', 'scsi-SQEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part16', 'scsi-SQEMU_QEMU_HARDDISK_e807b00a-8b7b-48ce-9460-0e3636b06250-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-07 01:01:22.314730 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--030f8481--3d62--5800--8c17--c22bf68268ab-osd--block--030f8481--3d62--5800--8c17--c22bf68268ab'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tJMgzR-FH3c-VJN8-t3LR-mjCg-cB1e-k3f88q', 'scsi-0QEMU_QEMU_HARDDISK_af4ba259-cb6f-4fcf-8c2a-944dae969065', 'scsi-SQEMU_QEMU_HARDDISK_af4ba259-cb6f-4fcf-8c2a-944dae969065'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314735 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8595c920--fb8d--5336--8a83--206e7467f719-osd--block--8595c920--fb8d--5336--8a83--206e7467f719'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vkkAYW-kdqr-YzMM-mBoy-jz1M-mFAH-9eEkCi', 'scsi-0QEMU_QEMU_HARDDISK_3c3014e6-40a5-4340-97e1-b63d744f1dc5', 'scsi-SQEMU_QEMU_HARDDISK_3c3014e6-40a5-4340-97e1-b63d744f1dc5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314739 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fdacbb0-1c31-482c-97c6-063a331da0fc', 'scsi-SQEMU_QEMU_HARDDISK_4fdacbb0-1c31-482c-97c6-063a331da0fc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314753 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314758 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:01:22.314763 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:01:22.314767 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6dc70d00--a24c--54e3--88f7--ca23e2f9592d-osd--block--6dc70d00--a24c--54e3--88f7--ca23e2f9592d', 'dm-uuid-LVM-jkgqALCR248QwEh8evGRjlqVGySWdBdbNaJY1aOUgbEjt6zlhDkXD7FZlYZulSsu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-07 01:01:22.314772 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3960461f--aa79--5447--98f8--9395cd95d2e3-osd--block--3960461f--aa79--5447--98f8--9395cd95d2e3', 'dm-uuid-LVM-Iqc4EXlTAo7kndsl0bo8MAuKJ1GjlGC0u2vyA0SVFnmeD66qOH2yKLG7OUWO7NHS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})
2026-03-07 01:01:22.314777 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-07 01:01:22.314781 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-07 01:01:22.314789 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-07 01:01:22.314801 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-07 01:01:22.314806 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-07 01:01:22.314811 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-07 01:01:22.314815 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-07 01:01:22.314820 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-07 01:01:22.314834 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d', 'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part1', 'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part14', 'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part15', 'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part16', 'scsi-SQEMU_QEMU_HARDDISK_ea170c0f-a027-4120-b295-61114d65555d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-07 01:01:22.314839 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6dc70d00--a24c--54e3--88f7--ca23e2f9592d-osd--block--6dc70d00--a24c--54e3--88f7--ca23e2f9592d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ZaZ5JV-te9Q-ux0A-aq6c-OwVe-IKBo-dM6h9H', 'scsi-0QEMU_QEMU_HARDDISK_fc232e22-cf7b-4f47-aee0-37a45820ed30', 'scsi-SQEMU_QEMU_HARDDISK_fc232e22-cf7b-4f47-aee0-37a45820ed30'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-07 01:01:22.314844 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3960461f--aa79--5447--98f8--9395cd95d2e3-osd--block--3960461f--aa79--5447--98f8--9395cd95d2e3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TpUQJo-P6aT-RbXI-AWtd-Rfbr-me5S-2vqAGd', 'scsi-0QEMU_QEMU_HARDDISK_81cf8acf-ab0c-4c96-8ca2-b696b28e7835', 'scsi-SQEMU_QEMU_HARDDISK_81cf8acf-ab0c-4c96-8ca2-b696b28e7835'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-07 01:01:22.314852 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fa6ba2e8-2fa1-4496-a2d0-ef7dd6ca5952', 'scsi-SQEMU_QEMU_HARDDISK_fa6ba2e8-2fa1-4496-a2d0-ef7dd6ca5952'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-07 01:01:22.314860 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-07-00-02-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-07 01:01:22.314865 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:22.314870 | orchestrator |
2026-03-07 01:01:22.314875 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-07 01:01:22.314879 | orchestrator | Saturday 07 March 2026 00:59:24 +0000 (0:00:00.770) 0:00:19.802 ********
2026-03-07 01:01:22.314884 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:01:22.314889 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:01:22.314893 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:01:22.314897 | orchestrator |
2026-03-07 01:01:22.314902 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-07 01:01:22.314906 | orchestrator | Saturday 07 March 2026 00:59:25 +0000 (0:00:00.729) 0:00:20.532 ********
2026-03-07 01:01:22.314911 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:01:22.314915 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:01:22.314919 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:01:22.314923 | orchestrator |
2026-03-07 01:01:22.314956 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-07 01:01:22.314964 | orchestrator | Saturday 07 March 2026 00:59:25 +0000 (0:00:00.688) 0:00:21.220 ********
2026-03-07 01:01:22.314968 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:01:22.314973 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:01:22.314977 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:01:22.314981 | orchestrator |
2026-03-07 01:01:22.314985 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-07 01:01:22.314990 | orchestrator | Saturday 07 March 2026 00:59:26 +0000 (0:00:00.741) 0:00:21.961
********
2026-03-07 01:01:22.314994 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.314998 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:22.315003 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:22.315007 | orchestrator |
2026-03-07 01:01:22.315011 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-07 01:01:22.315016 | orchestrator | Saturday 07 March 2026 00:59:26 +0000 (0:00:00.351) 0:00:22.313 ********
2026-03-07 01:01:22.315020 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.315024 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:22.315028 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:22.315037 | orchestrator |
2026-03-07 01:01:22.315041 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-07 01:01:22.315046 | orchestrator | Saturday 07 March 2026 00:59:27 +0000 (0:00:00.503) 0:00:22.817 ********
2026-03-07 01:01:22.315050 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.315054 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:22.315059 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:22.315063 | orchestrator |
2026-03-07 01:01:22.315067 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-07 01:01:22.315072 | orchestrator | Saturday 07 March 2026 00:59:28 +0000 (0:00:00.610) 0:00:23.428 ********
2026-03-07 01:01:22.315076 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-07 01:01:22.315081 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-07 01:01:22.315085 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-07 01:01:22.315089 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-07 01:01:22.315094 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-07 01:01:22.315098 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-07 01:01:22.315102 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-07 01:01:22.315107 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-07 01:01:22.315111 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-07 01:01:22.315115 | orchestrator |
2026-03-07 01:01:22.315120 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-07 01:01:22.315124 | orchestrator | Saturday 07 March 2026 00:59:28 +0000 (0:00:00.899) 0:00:24.327 ********
2026-03-07 01:01:22.315128 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-07 01:01:22.315133 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-07 01:01:22.315137 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-07 01:01:22.315142 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.315149 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-07 01:01:22.315155 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-07 01:01:22.315162 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-07 01:01:22.315169 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:22.315176 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-07 01:01:22.315183 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-07 01:01:22.315189 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-07 01:01:22.315196 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:22.315203 | orchestrator |
2026-03-07 01:01:22.315210 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-07 01:01:22.315217 | orchestrator | Saturday 07 March 2026 00:59:29 +0000 (0:00:00.405) 0:00:24.732 ********
2026-03-07 01:01:22.315224 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 01:01:22.315231 | orchestrator |
2026-03-07 01:01:22.315238 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-07 01:01:22.315250 | orchestrator | Saturday 07 March 2026 00:59:30 +0000 (0:00:00.790) 0:00:25.523 ********
2026-03-07 01:01:22.315261 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.315268 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:22.315275 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:22.315282 | orchestrator |
2026-03-07 01:01:22.315288 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-07 01:01:22.315295 | orchestrator | Saturday 07 March 2026 00:59:30 +0000 (0:00:00.415) 0:00:25.938 ********
2026-03-07 01:01:22.315302 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.315308 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:22.315315 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:22.315326 | orchestrator |
2026-03-07 01:01:22.315333 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-07 01:01:22.315340 | orchestrator | Saturday 07 March 2026 00:59:30 +0000 (0:00:00.362) 0:00:26.301 ********
2026-03-07 01:01:22.315346 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.315353 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:22.315359 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:01:22.315366 | orchestrator |
2026-03-07 01:01:22.315373 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-07 01:01:22.315380 | orchestrator | Saturday 07 March 2026 00:59:31 +0000 (0:00:00.351) 0:00:26.653 ********
2026-03-07 01:01:22.315386 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:01:22.315393 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:01:22.315400 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:01:22.315406 | orchestrator |
2026-03-07 01:01:22.315413 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-07 01:01:22.315420 | orchestrator | Saturday 07 March 2026 00:59:31 +0000 (0:00:00.666) 0:00:27.320 ********
2026-03-07 01:01:22.315426 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-07 01:01:22.315433 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-07 01:01:22.315440 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-07 01:01:22.315446 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.315453 | orchestrator |
2026-03-07 01:01:22.315460 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-07 01:01:22.315466 | orchestrator | Saturday 07 March 2026 00:59:32 +0000 (0:00:00.399) 0:00:27.720 ********
2026-03-07 01:01:22.315473 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-07 01:01:22.315480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-07 01:01:22.315487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-07 01:01:22.315494 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.315500 | orchestrator |
2026-03-07 01:01:22.315507 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-07 01:01:22.315514 | orchestrator | Saturday 07 March 2026 00:59:32 +0000 (0:00:00.388) 0:00:28.108 ********
2026-03-07 01:01:22.315520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-07 01:01:22.315527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-07 01:01:22.315534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-07 01:01:22.315540 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.315547 | orchestrator |
2026-03-07 01:01:22.315554 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-07 01:01:22.315561 | orchestrator | Saturday 07 March 2026 00:59:33 +0000 (0:00:00.453) 0:00:28.561 ********
2026-03-07 01:01:22.315568 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:01:22.315574 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:01:22.315581 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:01:22.315587 | orchestrator |
2026-03-07 01:01:22.315594 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-07 01:01:22.315601 | orchestrator | Saturday 07 March 2026 00:59:33 +0000 (0:00:00.408) 0:00:28.969 ********
2026-03-07 01:01:22.315608 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-07 01:01:22.315615 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-07 01:01:22.315621 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-07 01:01:22.315628 | orchestrator |
2026-03-07 01:01:22.315635 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-07 01:01:22.315642 | orchestrator | Saturday 07 March 2026 00:59:34 +0000 (0:00:00.600) 0:00:29.570 ********
2026-03-07 01:01:22.315648 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-07 01:01:22.315656 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-07 01:01:22.315671 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-07 01:01:22.315678 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-07 01:01:22.315685 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-07 01:01:22.315692 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-07 01:01:22.315699 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-07 01:01:22.315706 | orchestrator |
2026-03-07 01:01:22.315714 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-07 01:01:22.315718 | orchestrator | Saturday 07 March 2026 00:59:35 +0000 (0:00:01.132) 0:00:30.702 ********
2026-03-07 01:01:22.315723 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-07 01:01:22.315727 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-07 01:01:22.315731 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-07 01:01:22.315736 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-07 01:01:22.315740 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-07 01:01:22.315747 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-07 01:01:22.315755 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-07 01:01:22.315760 | orchestrator |
2026-03-07 01:01:22.315764 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-03-07 01:01:22.315768 | orchestrator | Saturday 07 March 2026 00:59:37 +0000 (0:00:02.243) 0:00:32.946 ********
2026-03-07 01:01:22.315773 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:01:22.315777 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:01:22.315782 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-03-07 01:01:22.315786 | orchestrator |
2026-03-07 01:01:22.315790 |
orchestrator | TASK [create openstack pool(s)] ************************************************
2026-03-07 01:01:22.315794 | orchestrator | Saturday 07 March 2026 00:59:38 +0000 (0:00:00.462) 0:00:33.408 ********
2026-03-07 01:01:22.315800 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-07 01:01:22.315806 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-07 01:01:22.315810 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-07 01:01:22.315815 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-07 01:01:22.315819 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-07 01:01:22.315823 | orchestrator |
2026-03-07 01:01:22.315828 | orchestrator | TASK [generate keys] ***********************************************************
2026-03-07 01:01:22.315836 | orchestrator | Saturday 07 March 2026 01:00:24 +0000 (0:00:46.864) 0:01:20.273 ********
2026-03-07 01:01:22.315840 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 01:01:22.315844 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 01:01:22.315849 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 01:01:22.315853 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 01:01:22.315857 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 01:01:22.315862 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 01:01:22.315866 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-03-07 01:01:22.315870 | orchestrator |
2026-03-07 01:01:22.315874 | orchestrator | TASK [get keys from monitors] **************************************************
2026-03-07 01:01:22.315879 | orchestrator | Saturday 07 March 2026 01:00:50 +0000 (0:00:25.150) 0:01:45.423 ********
2026-03-07 01:01:22.315883 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 01:01:22.315887 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 01:01:22.315891 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 01:01:22.315896 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 01:01:22.315900 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 01:01:22.315904 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 01:01:22.315909 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-07 01:01:22.315913 | orchestrator |
2026-03-07 01:01:22.315917 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-03-07 01:01:22.315921 | orchestrator | Saturday 07 March 2026 01:01:02 +0000 (0:00:12.230) 0:01:57.654 ********
2026-03-07 01:01:22.315943 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 01:01:22.315947 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-07 01:01:22.315952 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-07 01:01:22.315956 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 01:01:22.315964 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-07 01:01:22.315971 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-07 01:01:22.315976 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 01:01:22.315980 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-07 01:01:22.315985 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-07 01:01:22.315989 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 01:01:22.315993 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-07 01:01:22.315998 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-07 01:01:22.316002 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 01:01:22.316007 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-07 01:01:22.316011 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-07 01:01:22.316015 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-07 01:01:22.316020 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-07 01:01:22.316028 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-07 01:01:22.316032 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-03-07 01:01:22.316037 | orchestrator |
2026-03-07 01:01:22.316041 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:01:22.316046 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-07 01:01:22.316052 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-07 01:01:22.316056 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-07 01:01:22.316061 | orchestrator |
2026-03-07 01:01:22.316065 | orchestrator |
2026-03-07 01:01:22.316069 | orchestrator |
2026-03-07 01:01:22.316074 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 01:01:22.316078 | orchestrator | Saturday 07 March 2026 01:01:20 +0000 (0:00:18.447) 0:02:16.101 ********
2026-03-07 01:01:22.316083 | orchestrator | ===============================================================================
2026-03-07 01:01:22.316087 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.86s
2026-03-07 01:01:22.316091 | orchestrator | generate keys ---------------------------------------------------------- 25.15s
2026-03-07 01:01:22.316096 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.45s
2026-03-07 01:01:22.316100 | orchestrator | get keys from monitors ------------------------------------------------- 12.23s
2026-03-07 01:01:22.316104 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.28s
2026-03-07 01:01:22.316109 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.24s
2026-03-07 01:01:22.316113 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.97s
2026-03-07 01:01:22.316118 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.13s
2026-03-07 01:01:22.316122 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.93s
2026-03-07 01:01:22.316126 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.90s
2026-03-07 01:01:22.316131 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.87s
2026-03-07 01:01:22.316135 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.79s
2026-03-07 01:01:22.316139 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.77s
2026-03-07 01:01:22.316144 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.74s
2026-03-07 01:01:22.316148 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.74s
2026-03-07 01:01:22.316153 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.73s
2026-03-07 01:01:22.316157 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.72s
2026-03-07 01:01:22.316161 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.69s
2026-03-07 01:01:22.316166 | orchestrator | ceph-facts : Set default osd_pool_default_crush_rule fact --------------- 0.69s
2026-03-07
01:01:22.316170 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.67s 2026-03-07 01:01:22.316175 | orchestrator | 2026-03-07 01:01:22 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:01:22.316527 | orchestrator | 2026-03-07 01:01:22 | INFO  | Task ca24e09d-3337-4547-bd9f-afb6c62890da is in state STARTED 2026-03-07 01:01:22.319573 | orchestrator | 2026-03-07 01:01:22 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state STARTED 2026-03-07 01:01:22.320367 | orchestrator | 2026-03-07 01:01:22 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:25.372869 | orchestrator | 2026-03-07 01:01:25 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:01:25.373035 | orchestrator | 2026-03-07 01:01:25 | INFO  | Task ca24e09d-3337-4547-bd9f-afb6c62890da is in state STARTED 2026-03-07 01:01:25.373407 | orchestrator | 2026-03-07 01:01:25 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state STARTED 2026-03-07 01:01:25.373427 | orchestrator | 2026-03-07 01:01:25 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:28.423460 | orchestrator | 2026-03-07 01:01:28 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:01:28.425304 | orchestrator | 2026-03-07 01:01:28 | INFO  | Task ca24e09d-3337-4547-bd9f-afb6c62890da is in state STARTED 2026-03-07 01:01:28.426534 | orchestrator | 2026-03-07 01:01:28 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state STARTED 2026-03-07 01:01:28.426573 | orchestrator | 2026-03-07 01:01:28 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:31.463252 | orchestrator | 2026-03-07 01:01:31 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:01:31.463907 | orchestrator | 2026-03-07 01:01:31 | INFO  | Task ca24e09d-3337-4547-bd9f-afb6c62890da is in state STARTED 2026-03-07 01:01:31.465375 | orchestrator | 2026-03-07 
01:01:31 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state STARTED 2026-03-07 01:01:31.465422 | orchestrator | 2026-03-07 01:01:31 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:34.521745 | orchestrator | 2026-03-07 01:01:34 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:01:34.523844 | orchestrator | 2026-03-07 01:01:34 | INFO  | Task ca24e09d-3337-4547-bd9f-afb6c62890da is in state STARTED 2026-03-07 01:01:34.525603 | orchestrator | 2026-03-07 01:01:34 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state STARTED 2026-03-07 01:01:34.525652 | orchestrator | 2026-03-07 01:01:34 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:37.577145 | orchestrator | 2026-03-07 01:01:37 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:01:37.578681 | orchestrator | 2026-03-07 01:01:37 | INFO  | Task ca24e09d-3337-4547-bd9f-afb6c62890da is in state STARTED 2026-03-07 01:01:37.579864 | orchestrator | 2026-03-07 01:01:37 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state STARTED 2026-03-07 01:01:37.580075 | orchestrator | 2026-03-07 01:01:37 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:40.629071 | orchestrator | 2026-03-07 01:01:40 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:01:40.629170 | orchestrator | 2026-03-07 01:01:40 | INFO  | Task ca24e09d-3337-4547-bd9f-afb6c62890da is in state STARTED 2026-03-07 01:01:40.630955 | orchestrator | 2026-03-07 01:01:40 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state STARTED 2026-03-07 01:01:40.631010 | orchestrator | 2026-03-07 01:01:40 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:43.681198 | orchestrator | 2026-03-07 01:01:43 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:01:43.681291 | orchestrator | 2026-03-07 01:01:43 | INFO  | Task 
ca24e09d-3337-4547-bd9f-afb6c62890da is in state STARTED 2026-03-07 01:01:43.683247 | orchestrator | 2026-03-07 01:01:43 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state STARTED 2026-03-07 01:01:43.683310 | orchestrator | 2026-03-07 01:01:43 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:46.725694 | orchestrator | 2026-03-07 01:01:46 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:01:46.727969 | orchestrator | 2026-03-07 01:01:46 | INFO  | Task ca24e09d-3337-4547-bd9f-afb6c62890da is in state STARTED 2026-03-07 01:01:46.728846 | orchestrator | 2026-03-07 01:01:46 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state STARTED 2026-03-07 01:01:46.728877 | orchestrator | 2026-03-07 01:01:46 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:49.774457 | orchestrator | 2026-03-07 01:01:49 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:01:49.777120 | orchestrator | 2026-03-07 01:01:49 | INFO  | Task ca24e09d-3337-4547-bd9f-afb6c62890da is in state STARTED 2026-03-07 01:01:49.779237 | orchestrator | 2026-03-07 01:01:49 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state STARTED 2026-03-07 01:01:49.779682 | orchestrator | 2026-03-07 01:01:49 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:52.829137 | orchestrator | 2026-03-07 01:01:52 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:01:52.830329 | orchestrator | 2026-03-07 01:01:52 | INFO  | Task ca24e09d-3337-4547-bd9f-afb6c62890da is in state STARTED 2026-03-07 01:01:52.832251 | orchestrator | 2026-03-07 01:01:52 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state STARTED 2026-03-07 01:01:52.832297 | orchestrator | 2026-03-07 01:01:52 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:55.876206 | orchestrator | 2026-03-07 01:01:55 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state 
STARTED 2026-03-07 01:01:55.877417 | orchestrator | 2026-03-07 01:01:55 | INFO  | Task ca24e09d-3337-4547-bd9f-afb6c62890da is in state STARTED 2026-03-07 01:01:55.879891 | orchestrator | 2026-03-07 01:01:55 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state STARTED 2026-03-07 01:01:55.879927 | orchestrator | 2026-03-07 01:01:55 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:01:58.925769 | orchestrator | 2026-03-07 01:01:58 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:01:58.928165 | orchestrator | 2026-03-07 01:01:58 | INFO  | Task ca24e09d-3337-4547-bd9f-afb6c62890da is in state STARTED 2026-03-07 01:01:58.931483 | orchestrator | 2026-03-07 01:01:58 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state STARTED 2026-03-07 01:01:58.931547 | orchestrator | 2026-03-07 01:01:58 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:01.987121 | orchestrator | 2026-03-07 01:02:01 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:02:01.989178 | orchestrator | 2026-03-07 01:02:01 | INFO  | Task ca24e09d-3337-4547-bd9f-afb6c62890da is in state STARTED 2026-03-07 01:02:01.991711 | orchestrator | 2026-03-07 01:02:01 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state STARTED 2026-03-07 01:02:01.991775 | orchestrator | 2026-03-07 01:02:01 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:05.053298 | orchestrator | 2026-03-07 01:02:05 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:02:05.053402 | orchestrator | 2026-03-07 01:02:05 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:02:05.054524 | orchestrator | 2026-03-07 01:02:05 | INFO  | Task ca24e09d-3337-4547-bd9f-afb6c62890da is in state SUCCESS 2026-03-07 01:02:05.055329 | orchestrator | 2026-03-07 01:02:05 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state STARTED 2026-03-07 
01:02:05.055361 | orchestrator | 2026-03-07 01:02:05 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:08.097214 | orchestrator | 2026-03-07 01:02:08 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:02:08.098228 | orchestrator | 2026-03-07 01:02:08 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:02:08.100303 | orchestrator | 2026-03-07 01:02:08 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state STARTED 2026-03-07 01:02:08.100343 | orchestrator | 2026-03-07 01:02:08 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:11.146798 | orchestrator | 2026-03-07 01:02:11 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:02:11.147922 | orchestrator | 2026-03-07 01:02:11 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:02:11.148919 | orchestrator | 2026-03-07 01:02:11 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state STARTED 2026-03-07 01:02:11.148992 | orchestrator | 2026-03-07 01:02:11 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:14.206616 | orchestrator | 2026-03-07 01:02:14 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:02:14.207718 | orchestrator | 2026-03-07 01:02:14 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:02:14.209654 | orchestrator | 2026-03-07 01:02:14 | INFO  | Task c358628d-3ad0-43a1-87a1-21fcaed0179f is in state SUCCESS 2026-03-07 01:02:14.210187 | orchestrator | 2026-03-07 01:02:14 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:14.212164 | orchestrator | 2026-03-07 01:02:14.212208 | orchestrator | 2026-03-07 01:02:14.212220 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-07 01:02:14.212232 | orchestrator | 2026-03-07 01:02:14.212243 | orchestrator | TASK [Check if ceph keys exist] 
************************************************ 2026-03-07 01:02:14.212255 | orchestrator | Saturday 07 March 2026 01:01:26 +0000 (0:00:00.187) 0:00:00.187 ******** 2026-03-07 01:02:14.212266 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-07 01:02:14.212297 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-07 01:02:14.212309 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-07 01:02:14.212320 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-07 01:02:14.212331 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-07 01:02:14.212342 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-07 01:02:14.212353 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-07 01:02:14.212363 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-07 01:02:14.212374 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-07 01:02:14.212629 | orchestrator | 2026-03-07 01:02:14.212645 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-07 01:02:14.212656 | orchestrator | Saturday 07 March 2026 01:01:31 +0000 (0:00:04.872) 0:00:05.060 ******** 2026-03-07 01:02:14.212667 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-07 01:02:14.212678 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-07 01:02:14.212689 | orchestrator | ok: 
[testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-07 01:02:14.212700 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-07 01:02:14.212733 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-07 01:02:14.212744 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-07 01:02:14.212755 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-07 01:02:14.212765 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-07 01:02:14.212779 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-07 01:02:14.212798 | orchestrator | 2026-03-07 01:02:14.212816 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-07 01:02:14.212834 | orchestrator | Saturday 07 March 2026 01:01:35 +0000 (0:00:04.430) 0:00:09.491 ******** 2026-03-07 01:02:14.212852 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-07 01:02:14.212870 | orchestrator | 2026-03-07 01:02:14.212888 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-07 01:02:14.212906 | orchestrator | Saturday 07 March 2026 01:01:36 +0000 (0:00:01.069) 0:00:10.560 ******** 2026-03-07 01:02:14.212926 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-07 01:02:14.212944 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-07 01:02:14.213016 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-07 01:02:14.213035 | orchestrator | changed: [testbed-manager -> 
localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-07 01:02:14.213046 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-07 01:02:14.213058 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-07 01:02:14.213068 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-07 01:02:14.213079 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-07 01:02:14.213090 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-07 01:02:14.213101 | orchestrator | 2026-03-07 01:02:14.213112 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-07 01:02:14.213122 | orchestrator | Saturday 07 March 2026 01:01:51 +0000 (0:00:14.847) 0:00:25.407 ******** 2026-03-07 01:02:14.213133 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-07 01:02:14.213144 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-07 01:02:14.213155 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-07 01:02:14.213166 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-07 01:02:14.213189 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-07 01:02:14.213200 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-07 01:02:14.213211 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-07 01:02:14.213231 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-07 01:02:14.213243 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-07 01:02:14.213256 | orchestrator | 2026-03-07 01:02:14.213269 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-07 01:02:14.213283 | orchestrator | Saturday 07 March 2026 01:01:54 +0000 (0:00:03.323) 0:00:28.731 ******** 2026-03-07 01:02:14.213297 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-07 01:02:14.213320 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-07 01:02:14.213333 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-07 01:02:14.213346 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-07 01:02:14.213359 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-07 01:02:14.213372 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-07 01:02:14.213385 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-07 01:02:14.213397 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-07 01:02:14.213408 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-07 01:02:14.213419 | orchestrator | 2026-03-07 01:02:14.213430 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:02:14.213441 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:02:14.213453 | orchestrator | 2026-03-07 01:02:14.213464 | orchestrator | 2026-03-07 01:02:14.213475 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 
01:02:14.213486 | orchestrator | Saturday 07 March 2026 01:02:02 +0000 (0:00:07.584) 0:00:36.316 ******** 2026-03-07 01:02:14.213496 | orchestrator | =============================================================================== 2026-03-07 01:02:14.213507 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.85s 2026-03-07 01:02:14.213518 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.59s 2026-03-07 01:02:14.213529 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.87s 2026-03-07 01:02:14.213540 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.43s 2026-03-07 01:02:14.213551 | orchestrator | Check if target directories exist --------------------------------------- 3.32s 2026-03-07 01:02:14.213562 | orchestrator | Create share directory -------------------------------------------------- 1.07s 2026-03-07 01:02:14.213573 | orchestrator | 2026-03-07 01:02:14.213583 | orchestrator | 2026-03-07 01:02:14.213594 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:02:14.213605 | orchestrator | 2026-03-07 01:02:14.213616 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:02:14.213627 | orchestrator | Saturday 07 March 2026 01:00:23 +0000 (0:00:00.260) 0:00:00.260 ******** 2026-03-07 01:02:14.213638 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:02:14.213649 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:02:14.213660 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:02:14.213671 | orchestrator | 2026-03-07 01:02:14.213682 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:02:14.213693 | orchestrator | Saturday 07 March 2026 01:00:24 +0000 (0:00:00.319) 0:00:00.579 ******** 2026-03-07 01:02:14.213704 | orchestrator | 
ok: [testbed-node-0] => (item=enable_horizon_True) 2026-03-07 01:02:14.213716 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-03-07 01:02:14.213726 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-03-07 01:02:14.213737 | orchestrator | 2026-03-07 01:02:14.213748 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-03-07 01:02:14.213759 | orchestrator | 2026-03-07 01:02:14.213770 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-07 01:02:14.213781 | orchestrator | Saturday 07 March 2026 01:00:24 +0000 (0:00:00.468) 0:00:01.047 ******** 2026-03-07 01:02:14.213792 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:02:14.213803 | orchestrator | 2026-03-07 01:02:14.213814 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-07 01:02:14.213825 | orchestrator | Saturday 07 March 2026 01:00:25 +0000 (0:00:00.570) 0:00:01.618 ******** 2026-03-07 01:02:14.213868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 01:02:14.213885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 
'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 01:02:14.213933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 01:02:14.213988 | orchestrator | 2026-03-07 01:02:14.214008 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-07 01:02:14.214091 | orchestrator | Saturday 
07 March 2026 01:00:26 +0000 (0:00:01.373) 0:00:02.991 ******** 2026-03-07 01:02:14.214110 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:02:14.214127 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:02:14.214139 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:02:14.214149 | orchestrator | 2026-03-07 01:02:14.214160 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-07 01:02:14.214171 | orchestrator | Saturday 07 March 2026 01:00:27 +0000 (0:00:00.580) 0:00:03.572 ******** 2026-03-07 01:02:14.214183 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-07 01:02:14.214194 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-07 01:02:14.214205 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-07 01:02:14.214216 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-07 01:02:14.214227 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-07 01:02:14.214238 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-07 01:02:14.214249 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-07 01:02:14.214259 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-07 01:02:14.214280 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-07 01:02:14.214292 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-07 01:02:14.214302 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-07 01:02:14.214313 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-07 
01:02:14.214324 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-07 01:02:14.214335 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-07 01:02:14.214346 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-07 01:02:14.214356 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-07 01:02:14.214367 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-07 01:02:14.214378 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-07 01:02:14.214389 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-07 01:02:14.214400 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-07 01:02:14.214411 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-07 01:02:14.214430 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-07 01:02:14.214442 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-07 01:02:14.214453 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-07 01:02:14.214472 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-07 01:02:14.214485 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-07 01:02:14.214496 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 
'designate', 'enabled': True}) 2026-03-07 01:02:14.214507 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-07 01:02:14.214518 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-07 01:02:14.214529 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-07 01:02:14.214539 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-07 01:02:14.214550 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-07 01:02:14.214561 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-07 01:02:14.214572 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-07 01:02:14.214583 | orchestrator | 2026-03-07 01:02:14.214594 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-07 01:02:14.214605 | orchestrator | Saturday 07 March 2026 01:00:28 +0000 (0:00:00.825) 0:00:04.398 ******** 2026-03-07 01:02:14.214616 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:02:14.214633 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:02:14.214644 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:02:14.214655 | orchestrator | 2026-03-07 01:02:14.214666 | orchestrator | TASK [horizon : Check if 
policies shall be overwritten] ************************ 2026-03-07 01:02:14.214677 | orchestrator | Saturday 07 March 2026 01:00:28 +0000 (0:00:00.334) 0:00:04.732 ******** 2026-03-07 01:02:14.214688 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.214699 | orchestrator | 2026-03-07 01:02:14.214709 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:02:14.214720 | orchestrator | Saturday 07 March 2026 01:00:28 +0000 (0:00:00.162) 0:00:04.895 ******** 2026-03-07 01:02:14.214731 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.214742 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:02:14.214753 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:02:14.214763 | orchestrator | 2026-03-07 01:02:14.214774 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-07 01:02:14.214785 | orchestrator | Saturday 07 March 2026 01:00:29 +0000 (0:00:00.518) 0:00:05.413 ******** 2026-03-07 01:02:14.214796 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:02:14.214807 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:02:14.214817 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:02:14.214828 | orchestrator | 2026-03-07 01:02:14.214839 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-07 01:02:14.214850 | orchestrator | Saturday 07 March 2026 01:00:29 +0000 (0:00:00.332) 0:00:05.746 ******** 2026-03-07 01:02:14.214861 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.214871 | orchestrator | 2026-03-07 01:02:14.214882 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:02:14.214893 | orchestrator | Saturday 07 March 2026 01:00:29 +0000 (0:00:00.143) 0:00:05.889 ******** 2026-03-07 01:02:14.214904 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.214914 | orchestrator | skipping: 
[testbed-node-1] 2026-03-07 01:02:14.214925 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:02:14.214936 | orchestrator | 2026-03-07 01:02:14.214947 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-07 01:02:14.214988 | orchestrator | Saturday 07 March 2026 01:00:29 +0000 (0:00:00.296) 0:00:06.186 ******** 2026-03-07 01:02:14.215000 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:02:14.215011 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:02:14.215022 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:02:14.215033 | orchestrator | 2026-03-07 01:02:14.215043 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-07 01:02:14.215054 | orchestrator | Saturday 07 March 2026 01:00:30 +0000 (0:00:00.364) 0:00:06.550 ******** 2026-03-07 01:02:14.215065 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.215076 | orchestrator | 2026-03-07 01:02:14.215087 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:02:14.215098 | orchestrator | Saturday 07 March 2026 01:00:30 +0000 (0:00:00.386) 0:00:06.937 ******** 2026-03-07 01:02:14.215109 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.215120 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:02:14.215130 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:02:14.215141 | orchestrator | 2026-03-07 01:02:14.215158 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-07 01:02:14.215169 | orchestrator | Saturday 07 March 2026 01:00:30 +0000 (0:00:00.327) 0:00:07.264 ******** 2026-03-07 01:02:14.215180 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:02:14.215191 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:02:14.215202 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:02:14.215213 | orchestrator | 2026-03-07 01:02:14.215224 | orchestrator | TASK 
[horizon : Check if policies shall be overwritten] ************************ 2026-03-07 01:02:14.215241 | orchestrator | Saturday 07 March 2026 01:00:31 +0000 (0:00:00.358) 0:00:07.623 ******** 2026-03-07 01:02:14.215252 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.215263 | orchestrator | 2026-03-07 01:02:14.215281 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:02:14.215292 | orchestrator | Saturday 07 March 2026 01:00:31 +0000 (0:00:00.185) 0:00:07.808 ******** 2026-03-07 01:02:14.215303 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.215313 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:02:14.215324 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:02:14.215335 | orchestrator | 2026-03-07 01:02:14.215346 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-07 01:02:14.215357 | orchestrator | Saturday 07 March 2026 01:00:31 +0000 (0:00:00.304) 0:00:08.113 ******** 2026-03-07 01:02:14.215368 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:02:14.215378 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:02:14.215389 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:02:14.215400 | orchestrator | 2026-03-07 01:02:14.215411 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-07 01:02:14.215422 | orchestrator | Saturday 07 March 2026 01:00:32 +0000 (0:00:00.635) 0:00:08.749 ******** 2026-03-07 01:02:14.215433 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.215444 | orchestrator | 2026-03-07 01:02:14.215455 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:02:14.215465 | orchestrator | Saturday 07 March 2026 01:00:32 +0000 (0:00:00.143) 0:00:08.893 ******** 2026-03-07 01:02:14.215476 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.215487 | orchestrator 
| skipping: [testbed-node-1] 2026-03-07 01:02:14.215498 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:02:14.215509 | orchestrator | 2026-03-07 01:02:14.215520 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-07 01:02:14.215530 | orchestrator | Saturday 07 March 2026 01:00:32 +0000 (0:00:00.314) 0:00:09.207 ******** 2026-03-07 01:02:14.215541 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:02:14.215552 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:02:14.215563 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:02:14.215574 | orchestrator | 2026-03-07 01:02:14.215585 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-07 01:02:14.215596 | orchestrator | Saturday 07 March 2026 01:00:33 +0000 (0:00:00.408) 0:00:09.615 ******** 2026-03-07 01:02:14.215607 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.215618 | orchestrator | 2026-03-07 01:02:14.215629 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:02:14.215639 | orchestrator | Saturday 07 March 2026 01:00:33 +0000 (0:00:00.144) 0:00:09.760 ******** 2026-03-07 01:02:14.215650 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.215661 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:02:14.215672 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:02:14.215683 | orchestrator | 2026-03-07 01:02:14.215694 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-07 01:02:14.215705 | orchestrator | Saturday 07 March 2026 01:00:33 +0000 (0:00:00.343) 0:00:10.103 ******** 2026-03-07 01:02:14.215715 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:02:14.215726 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:02:14.215737 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:02:14.215748 | orchestrator | 2026-03-07 01:02:14.215759 | 
orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-07 01:02:14.215770 | orchestrator | Saturday 07 March 2026 01:00:34 +0000 (0:00:00.565) 0:00:10.669 ******** 2026-03-07 01:02:14.215781 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.215792 | orchestrator | 2026-03-07 01:02:14.215803 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:02:14.215816 | orchestrator | Saturday 07 March 2026 01:00:34 +0000 (0:00:00.153) 0:00:10.822 ******** 2026-03-07 01:02:14.215834 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.215845 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:02:14.215856 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:02:14.215867 | orchestrator | 2026-03-07 01:02:14.215878 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-07 01:02:14.215895 | orchestrator | Saturday 07 March 2026 01:00:34 +0000 (0:00:00.307) 0:00:11.129 ******** 2026-03-07 01:02:14.215906 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:02:14.215917 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:02:14.215928 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:02:14.215939 | orchestrator | 2026-03-07 01:02:14.215973 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-07 01:02:14.215994 | orchestrator | Saturday 07 March 2026 01:00:35 +0000 (0:00:00.376) 0:00:11.506 ******** 2026-03-07 01:02:14.216013 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.216031 | orchestrator | 2026-03-07 01:02:14.216049 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:02:14.216067 | orchestrator | Saturday 07 March 2026 01:00:35 +0000 (0:00:00.147) 0:00:11.653 ******** 2026-03-07 01:02:14.216078 | orchestrator | skipping: [testbed-node-0] 2026-03-07 
01:02:14.216089 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:02:14.216100 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:02:14.216111 | orchestrator | 2026-03-07 01:02:14.216122 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-07 01:02:14.216133 | orchestrator | Saturday 07 March 2026 01:00:35 +0000 (0:00:00.546) 0:00:12.200 ******** 2026-03-07 01:02:14.216144 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:02:14.216155 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:02:14.216165 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:02:14.216176 | orchestrator | 2026-03-07 01:02:14.216187 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-07 01:02:14.216198 | orchestrator | Saturday 07 March 2026 01:00:36 +0000 (0:00:00.351) 0:00:12.552 ******** 2026-03-07 01:02:14.216216 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.216228 | orchestrator | 2026-03-07 01:02:14.216239 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:02:14.216250 | orchestrator | Saturday 07 March 2026 01:00:36 +0000 (0:00:00.148) 0:00:12.700 ******** 2026-03-07 01:02:14.216261 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.216272 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:02:14.216283 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:02:14.216294 | orchestrator | 2026-03-07 01:02:14.216317 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-07 01:02:14.216328 | orchestrator | Saturday 07 March 2026 01:00:36 +0000 (0:00:00.326) 0:00:13.026 ******** 2026-03-07 01:02:14.216339 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:02:14.216350 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:02:14.216361 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:02:14.216372 | orchestrator | 
2026-03-07 01:02:14.216383 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-07 01:02:14.216394 | orchestrator | Saturday 07 March 2026 01:00:37 +0000 (0:00:00.357) 0:00:13.384 ******** 2026-03-07 01:02:14.216405 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.216416 | orchestrator | 2026-03-07 01:02:14.216427 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-07 01:02:14.216438 | orchestrator | Saturday 07 March 2026 01:00:37 +0000 (0:00:00.142) 0:00:13.526 ******** 2026-03-07 01:02:14.216449 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.216460 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:02:14.216471 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:02:14.216482 | orchestrator | 2026-03-07 01:02:14.216493 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-07 01:02:14.216504 | orchestrator | Saturday 07 March 2026 01:00:37 +0000 (0:00:00.574) 0:00:14.101 ******** 2026-03-07 01:02:14.216515 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:02:14.216526 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:02:14.216537 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:02:14.216548 | orchestrator | 2026-03-07 01:02:14.216559 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-07 01:02:14.216577 | orchestrator | Saturday 07 March 2026 01:00:39 +0000 (0:00:01.840) 0:00:15.941 ******** 2026-03-07 01:02:14.216589 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-07 01:02:14.216599 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-07 01:02:14.216610 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-07 
01:02:14.216622 | orchestrator | 2026-03-07 01:02:14.216632 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-07 01:02:14.216643 | orchestrator | Saturday 07 March 2026 01:00:41 +0000 (0:00:02.223) 0:00:18.165 ******** 2026-03-07 01:02:14.216654 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-07 01:02:14.216666 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-07 01:02:14.216677 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-07 01:02:14.216688 | orchestrator | 2026-03-07 01:02:14.216699 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-07 01:02:14.216710 | orchestrator | Saturday 07 March 2026 01:00:44 +0000 (0:00:02.716) 0:00:20.882 ******** 2026-03-07 01:02:14.216721 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-07 01:02:14.216732 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-07 01:02:14.216743 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-07 01:02:14.216754 | orchestrator | 2026-03-07 01:02:14.216765 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-07 01:02:14.216777 | orchestrator | Saturday 07 March 2026 01:00:46 +0000 (0:00:02.382) 0:00:23.265 ******** 2026-03-07 01:02:14.216788 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.216799 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:02:14.216810 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:02:14.216821 | orchestrator | 2026-03-07 01:02:14.216832 | orchestrator | TASK 
[horizon : Copying over custom themes] ************************************ 2026-03-07 01:02:14.216843 | orchestrator | Saturday 07 March 2026 01:00:47 +0000 (0:00:00.335) 0:00:23.600 ******** 2026-03-07 01:02:14.216856 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.216875 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:02:14.216892 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:02:14.216910 | orchestrator | 2026-03-07 01:02:14.216928 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-07 01:02:14.216947 | orchestrator | Saturday 07 March 2026 01:00:47 +0000 (0:00:00.288) 0:00:23.889 ******** 2026-03-07 01:02:14.217008 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:02:14.217027 | orchestrator | 2026-03-07 01:02:14.217044 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-07 01:02:14.217060 | orchestrator | Saturday 07 March 2026 01:00:48 +0000 (0:00:00.904) 0:00:24.794 ******** 2026-03-07 01:02:14.217092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 01:02:14.217121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 
'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 01:02:14.217148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 01:02:14.217168 | orchestrator | 2026-03-07 01:02:14.217184 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-07 01:02:14.217203 | orchestrator | 
Saturday 07 March 2026 01:00:50 +0000 (0:00:01.988) 0:00:26.782 ******** 2026-03-07 01:02:14.217234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-07 01:02:14.217265 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.217291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-07 01:02:14.217305 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:02:14.217331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 
'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-07 01:02:14.217351 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:02:14.217362 | orchestrator | 2026-03-07 01:02:14.217373 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-07 01:02:14.217384 | orchestrator | Saturday 07 March 2026 01:00:51 +0000 (0:00:00.795) 0:00:27.578 ******** 2026-03-07 01:02:14.217397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-07 01:02:14.217409 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.217434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 
'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-07 01:02:14.217455 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:02:14.217468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}})  2026-03-07 01:02:14.217480 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:02:14.217491 | orchestrator | 2026-03-07 01:02:14.217502 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-07 01:02:14.217513 | orchestrator | Saturday 07 March 2026 01:00:52 +0000 (0:00:01.157) 0:00:28.735 ******** 2026-03-07 01:02:14.217541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 01:02:14.217562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 01:02:14.217606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-07 01:02:14.217627 | orchestrator | 2026-03-07 01:02:14.217647 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-07 01:02:14.217665 | orchestrator | Saturday 07 March 2026 01:00:54 +0000 (0:00:01.895) 0:00:30.631 ******** 2026-03-07 01:02:14.217684 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:02:14.217703 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:02:14.217721 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:02:14.217736 | orchestrator | 2026-03-07 01:02:14.217747 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-07 01:02:14.217759 | orchestrator | Saturday 07 March 2026 01:00:54 +0000 (0:00:00.353) 0:00:30.984 ******** 2026-03-07 01:02:14.217770 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:02:14.217781 | orchestrator | 2026-03-07 
01:02:14.217792 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-07 01:02:14.217803 | orchestrator | Saturday 07 March 2026 01:00:55 +0000 (0:00:00.831) 0:00:31.815 ******** 2026-03-07 01:02:14.217814 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:02:14.217824 | orchestrator | 2026-03-07 01:02:14.217836 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-07 01:02:14.217847 | orchestrator | Saturday 07 March 2026 01:00:58 +0000 (0:00:02.642) 0:00:34.458 ******** 2026-03-07 01:02:14.217858 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:02:14.217869 | orchestrator | 2026-03-07 01:02:14.217880 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-07 01:02:14.217891 | orchestrator | Saturday 07 March 2026 01:01:00 +0000 (0:00:02.729) 0:00:37.187 ******** 2026-03-07 01:02:14.217902 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:02:14.217926 | orchestrator | 2026-03-07 01:02:14.217944 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-07 01:02:14.218065 | orchestrator | Saturday 07 March 2026 01:01:18 +0000 (0:00:17.295) 0:00:54.483 ******** 2026-03-07 01:02:14.218078 | orchestrator | 2026-03-07 01:02:14.218089 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-07 01:02:14.218100 | orchestrator | Saturday 07 March 2026 01:01:18 +0000 (0:00:00.070) 0:00:54.553 ******** 2026-03-07 01:02:14.218111 | orchestrator | 2026-03-07 01:02:14.218121 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-07 01:02:14.218132 | orchestrator | Saturday 07 March 2026 01:01:18 +0000 (0:00:00.084) 0:00:54.637 ******** 2026-03-07 01:02:14.218143 | orchestrator | 2026-03-07 01:02:14.218154 | orchestrator | RUNNING HANDLER [horizon : Restart 
horizon container] ************************** 2026-03-07 01:02:14.218165 | orchestrator | Saturday 07 March 2026 01:01:18 +0000 (0:00:00.078) 0:00:54.716 ******** 2026-03-07 01:02:14.218176 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:02:14.218187 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:02:14.218198 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:02:14.218209 | orchestrator | 2026-03-07 01:02:14.218219 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:02:14.218231 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-07 01:02:14.218251 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-07 01:02:14.218262 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-07 01:02:14.218273 | orchestrator | 2026-03-07 01:02:14.218285 | orchestrator | 2026-03-07 01:02:14.218296 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:02:14.218313 | orchestrator | Saturday 07 March 2026 01:02:12 +0000 (0:00:53.682) 0:01:48.399 ******** 2026-03-07 01:02:14.218324 | orchestrator | =============================================================================== 2026-03-07 01:02:14.218334 | orchestrator | horizon : Restart horizon container ------------------------------------ 53.68s 2026-03-07 01:02:14.218344 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.30s 2026-03-07 01:02:14.218353 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.73s 2026-03-07 01:02:14.218363 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.72s 2026-03-07 01:02:14.218373 | orchestrator | horizon : Creating Horizon database 
------------------------------------- 2.64s 2026-03-07 01:02:14.218382 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.38s 2026-03-07 01:02:14.218394 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.22s 2026-03-07 01:02:14.218411 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.99s 2026-03-07 01:02:14.218427 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.90s 2026-03-07 01:02:14.218445 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.84s 2026-03-07 01:02:14.218462 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.37s 2026-03-07 01:02:14.218478 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.16s 2026-03-07 01:02:14.218495 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.90s 2026-03-07 01:02:14.218505 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.83s 2026-03-07 01:02:14.218515 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.83s 2026-03-07 01:02:14.218524 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.80s 2026-03-07 01:02:14.218534 | orchestrator | horizon : Update policy file name --------------------------------------- 0.64s 2026-03-07 01:02:14.218551 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.58s 2026-03-07 01:02:14.218561 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.57s 2026-03-07 01:02:14.218571 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s 2026-03-07 01:02:17.292880 | orchestrator | 2026-03-07 01:02:17 | INFO  | Task 
f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:02:17.299252 | orchestrator | 2026-03-07 01:02:17 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:02:17.299643 | orchestrator | 2026-03-07 01:02:17 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:20.341498 | orchestrator | 2026-03-07 01:02:20 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:02:20.342668 | orchestrator | 2026-03-07 01:02:20 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:02:20.342716 | orchestrator | 2026-03-07 01:02:20 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:23.386332 | orchestrator | 2026-03-07 01:02:23 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:02:23.387926 | orchestrator | 2026-03-07 01:02:23 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:02:23.388054 | orchestrator | 2026-03-07 01:02:23 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:26.443166 | orchestrator | 2026-03-07 01:02:26 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:02:26.443863 | orchestrator | 2026-03-07 01:02:26 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:02:26.443901 | orchestrator | 2026-03-07 01:02:26 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:29.496607 | orchestrator | 2026-03-07 01:02:29 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:02:29.498413 | orchestrator | 2026-03-07 01:02:29 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:02:29.498488 | orchestrator | 2026-03-07 01:02:29 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:32.549921 | orchestrator | 2026-03-07 01:02:32 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 
01:02:32.552098 | orchestrator | 2026-03-07 01:02:32 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:02:32.552185 | orchestrator | 2026-03-07 01:02:32 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:35.598073 | orchestrator | 2026-03-07 01:02:35 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:02:35.599390 | orchestrator | 2026-03-07 01:02:35 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:02:35.599408 | orchestrator | 2026-03-07 01:02:35 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:38.642172 | orchestrator | 2026-03-07 01:02:38 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:02:38.643372 | orchestrator | 2026-03-07 01:02:38 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:02:38.643408 | orchestrator | 2026-03-07 01:02:38 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:41.689174 | orchestrator | 2026-03-07 01:02:41 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:02:41.690336 | orchestrator | 2026-03-07 01:02:41 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:02:41.690392 | orchestrator | 2026-03-07 01:02:41 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:44.730816 | orchestrator | 2026-03-07 01:02:44 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:02:44.732783 | orchestrator | 2026-03-07 01:02:44 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:02:44.732859 | orchestrator | 2026-03-07 01:02:44 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:47.783337 | orchestrator | 2026-03-07 01:02:47 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:02:47.786090 | orchestrator | 2026-03-07 01:02:47 | INFO  | Task 
e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:02:47.786142 | orchestrator | 2026-03-07 01:02:47 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:50.833484 | orchestrator | 2026-03-07 01:02:50 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:02:50.834889 | orchestrator | 2026-03-07 01:02:50 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:02:50.835193 | orchestrator | 2026-03-07 01:02:50 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:53.884250 | orchestrator | 2026-03-07 01:02:53 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:02:53.885095 | orchestrator | 2026-03-07 01:02:53 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:02:53.885137 | orchestrator | 2026-03-07 01:02:53 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:56.931741 | orchestrator | 2026-03-07 01:02:56 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:02:56.939118 | orchestrator | 2026-03-07 01:02:56 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:02:56.939230 | orchestrator | 2026-03-07 01:02:56 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:02:59.989132 | orchestrator | 2026-03-07 01:02:59 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:02:59.991497 | orchestrator | 2026-03-07 01:02:59 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:02:59.991549 | orchestrator | 2026-03-07 01:02:59 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:03:03.046106 | orchestrator | 2026-03-07 01:03:03 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:03:03.049140 | orchestrator | 2026-03-07 01:03:03 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 
01:03:03.049224 | orchestrator | 2026-03-07 01:03:03 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:03:06.096839 | orchestrator | 2026-03-07 01:03:06 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:03:06.099372 | orchestrator | 2026-03-07 01:03:06 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state STARTED 2026-03-07 01:03:06.099458 | orchestrator | 2026-03-07 01:03:06 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:03:09.178421 | orchestrator | 2026-03-07 01:03:09 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:03:09.182487 | orchestrator | 2026-03-07 01:03:09 | INFO  | Task ed905e23-4ee6-4dc5-9dc4-361c1c587ec6 is in state STARTED 2026-03-07 01:03:09.188716 | orchestrator | 2026-03-07 01:03:09 | INFO  | Task e8c16613-ee54-4fe0-9f3e-7f9efe586440 is in state SUCCESS 2026-03-07 01:03:09.188833 | orchestrator | 2026-03-07 01:03:09 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:03:09.190355 | orchestrator | 2026-03-07 01:03:09 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:03:09.190641 | orchestrator | 2026-03-07 01:03:09 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:03:12.271323 | orchestrator | 2026-03-07 01:03:12 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:03:12.274372 | orchestrator | 2026-03-07 01:03:12 | INFO  | Task ed905e23-4ee6-4dc5-9dc4-361c1c587ec6 is in state STARTED 2026-03-07 01:03:12.274928 | orchestrator | 2026-03-07 01:03:12 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:03:12.276823 | orchestrator | 2026-03-07 01:03:12 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:03:12.276875 | orchestrator | 2026-03-07 01:03:12 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:03:15.313434 | orchestrator | 2026-03-07 01:03:15 | 
INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:03:15.313976 | orchestrator | 2026-03-07 01:03:15 | INFO  | Task ed905e23-4ee6-4dc5-9dc4-361c1c587ec6 is in state SUCCESS 2026-03-07 01:03:15.314763 | orchestrator | 2026-03-07 01:03:15 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:03:15.317011 | orchestrator | 2026-03-07 01:03:15 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:03:15.317086 | orchestrator | 2026-03-07 01:03:15 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:03:15.317768 | orchestrator | 2026-03-07 01:03:15 | INFO  | Task 581d95b4-6492-4051-a36c-c57f8ea8fbcb is in state STARTED 2026-03-07 01:03:15.317800 | orchestrator | 2026-03-07 01:03:15 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:03:18.358521 | orchestrator | 2026-03-07 01:03:18 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state STARTED 2026-03-07 01:03:18.365133 | orchestrator | 2026-03-07 01:03:18 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:03:18.369379 | orchestrator | 2026-03-07 01:03:18 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:03:18.370257 | orchestrator | 2026-03-07 01:03:18 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:03:18.377433 | orchestrator | 2026-03-07 01:03:18 | INFO  | Task 581d95b4-6492-4051-a36c-c57f8ea8fbcb is in state STARTED 2026-03-07 01:03:18.377523 | orchestrator | 2026-03-07 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:03:21.453689 | orchestrator | 2026-03-07 01:03:21 | INFO  | Task f009412f-3063-4903-abb8-4f4ec4dd6d05 is in state SUCCESS 2026-03-07 01:03:21.455902 | orchestrator | 2026-03-07 01:03:21.456107 | orchestrator | 2026-03-07 01:03:21.456455 | orchestrator | PLAY [Apply role cephclient] 
*************************************************** 2026-03-07 01:03:21.456466 | orchestrator | 2026-03-07 01:03:21.456474 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-07 01:03:21.456483 | orchestrator | Saturday 07 March 2026 01:02:07 +0000 (0:00:00.266) 0:00:00.266 ******** 2026-03-07 01:03:21.456491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-07 01:03:21.456500 | orchestrator | 2026-03-07 01:03:21.456507 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-07 01:03:21.456515 | orchestrator | Saturday 07 March 2026 01:02:07 +0000 (0:00:00.237) 0:00:00.503 ******** 2026-03-07 01:03:21.456523 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-07 01:03:21.456530 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-07 01:03:21.456538 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-07 01:03:21.456571 | orchestrator | 2026-03-07 01:03:21.456579 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-07 01:03:21.456587 | orchestrator | Saturday 07 March 2026 01:02:09 +0000 (0:00:01.402) 0:00:01.906 ******** 2026-03-07 01:03:21.456595 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-07 01:03:21.456603 | orchestrator | 2026-03-07 01:03:21.456610 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-07 01:03:21.456620 | orchestrator | Saturday 07 March 2026 01:02:10 +0000 (0:00:01.627) 0:00:03.533 ******** 2026-03-07 01:03:21.456633 | orchestrator | changed: [testbed-manager] 2026-03-07 01:03:21.456650 | orchestrator | 2026-03-07 01:03:21.456664 | orchestrator | TASK 
[osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-07 01:03:21.456675 | orchestrator | Saturday 07 March 2026 01:02:11 +0000 (0:00:00.955) 0:00:04.489 ******** 2026-03-07 01:03:21.456686 | orchestrator | changed: [testbed-manager] 2026-03-07 01:03:21.456698 | orchestrator | 2026-03-07 01:03:21.456710 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-07 01:03:21.456722 | orchestrator | Saturday 07 March 2026 01:02:12 +0000 (0:00:00.954) 0:00:05.443 ******** 2026-03-07 01:03:21.456734 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-03-07 01:03:21.456745 | orchestrator | ok: [testbed-manager] 2026-03-07 01:03:21.456757 | orchestrator | 2026-03-07 01:03:21.456770 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-07 01:03:21.456781 | orchestrator | Saturday 07 March 2026 01:02:55 +0000 (0:00:42.216) 0:00:47.660 ******** 2026-03-07 01:03:21.456804 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-07 01:03:21.456812 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-07 01:03:21.456820 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-07 01:03:21.456827 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-07 01:03:21.456835 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-07 01:03:21.456842 | orchestrator | 2026-03-07 01:03:21.457050 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-07 01:03:21.457063 | orchestrator | Saturday 07 March 2026 01:02:59 +0000 (0:00:04.779) 0:00:52.440 ******** 2026-03-07 01:03:21.457071 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-07 01:03:21.457078 | orchestrator | 2026-03-07 01:03:21.457086 | orchestrator | TASK [osism.services.cephclient : Include package tasks] 
***********************
2026-03-07 01:03:21.457093 | orchestrator | Saturday 07 March 2026 01:03:00 +0000 (0:00:00.531) 0:00:52.972 ********
2026-03-07 01:03:21.457101 | orchestrator | skipping: [testbed-manager]
2026-03-07 01:03:21.457108 | orchestrator |
2026-03-07 01:03:21.457116 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-07 01:03:21.457124 | orchestrator | Saturday 07 March 2026 01:03:00 +0000 (0:00:00.153) 0:00:53.125 ********
2026-03-07 01:03:21.457131 | orchestrator | skipping: [testbed-manager]
2026-03-07 01:03:21.457138 | orchestrator |
2026-03-07 01:03:21.457146 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-03-07 01:03:21.457153 | orchestrator | Saturday 07 March 2026 01:03:01 +0000 (0:00:00.549) 0:00:53.675 ********
2026-03-07 01:03:21.457160 | orchestrator | changed: [testbed-manager]
2026-03-07 01:03:21.457168 | orchestrator |
2026-03-07 01:03:21.457175 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-03-07 01:03:21.457182 | orchestrator | Saturday 07 March 2026 01:03:02 +0000 (0:00:01.476) 0:00:55.152 ********
2026-03-07 01:03:21.457189 | orchestrator | changed: [testbed-manager]
2026-03-07 01:03:21.457197 | orchestrator |
2026-03-07 01:03:21.457204 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-03-07 01:03:21.457211 | orchestrator | Saturday 07 March 2026 01:03:03 +0000 (0:00:00.767) 0:00:55.920 ********
2026-03-07 01:03:21.457219 | orchestrator | changed: [testbed-manager]
2026-03-07 01:03:21.457237 | orchestrator |
2026-03-07 01:03:21.457245 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-03-07 01:03:21.457297 | orchestrator | Saturday 07 March 2026 01:03:04 +0000 (0:00:00.693) 0:00:56.613 ********
2026-03-07 01:03:21.457307 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-07 01:03:21.457314 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-07 01:03:21.457321 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-07 01:03:21.457329 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-07 01:03:21.457336 | orchestrator |
2026-03-07 01:03:21.457343 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:03:21.457351 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-07 01:03:21.457360 | orchestrator |
2026-03-07 01:03:21.457367 | orchestrator |
2026-03-07 01:03:21.457408 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 01:03:21.457417 | orchestrator | Saturday 07 March 2026 01:03:05 +0000 (0:00:01.704) 0:00:58.318 ********
2026-03-07 01:03:21.457425 | orchestrator | ===============================================================================
2026-03-07 01:03:21.457432 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.22s
2026-03-07 01:03:21.457440 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.78s
2026-03-07 01:03:21.457447 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.70s
2026-03-07 01:03:21.457454 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.63s
2026-03-07 01:03:21.457461 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.48s
2026-03-07 01:03:21.457469 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.40s
2026-03-07 01:03:21.457476 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.96s
2026-03-07 01:03:21.457483 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.95s
2026-03-07 01:03:21.457490 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.77s
2026-03-07 01:03:21.457498 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.69s
2026-03-07 01:03:21.457505 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.55s
2026-03-07 01:03:21.457512 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.53s
2026-03-07 01:03:21.457519 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s
2026-03-07 01:03:21.457527 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s
2026-03-07 01:03:21.457534 | orchestrator |
2026-03-07 01:03:21.457541 | orchestrator |
2026-03-07 01:03:21.457548 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 01:03:21.457555 | orchestrator |
2026-03-07 01:03:21.457563 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-07 01:03:21.457570 | orchestrator | Saturday 07 March 2026 01:03:10 +0000 (0:00:00.198) 0:00:00.198 ********
2026-03-07 01:03:21.457577 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:03:21.457585 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:03:21.457592 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:03:21.457599 | orchestrator |
2026-03-07 01:03:21.457607 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-07 01:03:21.457614 | orchestrator | Saturday 07 March 2026 01:03:11 +0000 (0:00:00.350) 0:00:00.548 ********
2026-03-07 01:03:21.457621 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-07 01:03:21.457629 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-07 01:03:21.457643 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-07 01:03:21.457650 | orchestrator |
2026-03-07 01:03:21.457658 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-03-07 01:03:21.457672 | orchestrator |
2026-03-07 01:03:21.457679 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-03-07 01:03:21.457687 | orchestrator | Saturday 07 March 2026 01:03:12 +0000 (0:00:00.867) 0:00:01.415 ********
2026-03-07 01:03:21.457694 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:03:21.457702 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:03:21.457709 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:03:21.457716 | orchestrator |
2026-03-07 01:03:21.457724 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:03:21.457732 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:03:21.457740 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:03:21.457747 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:03:21.457754 | orchestrator |
2026-03-07 01:03:21.457762 | orchestrator |
2026-03-07 01:03:21.457769 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 01:03:21.457776 | orchestrator | Saturday 07 March 2026 01:03:12 +0000 (0:00:00.823) 0:00:02.239 ********
2026-03-07 01:03:21.457783 | orchestrator | ===============================================================================
2026-03-07 01:03:21.457791 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s
2026-03-07 01:03:21.457798 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.82s
2026-03-07 01:03:21.457805 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2026-03-07 01:03:21.457813 | orchestrator |
2026-03-07 01:03:21.457820 | orchestrator |
2026-03-07 01:03:21.457827 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 01:03:21.457835 | orchestrator |
2026-03-07 01:03:21.457842 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-07 01:03:21.457853 | orchestrator | Saturday 07 March 2026 01:00:23 +0000 (0:00:00.266) 0:00:00.266 ********
2026-03-07 01:03:21.457865 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:03:21.457878 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:03:21.457889 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:03:21.457901 | orchestrator |
2026-03-07 01:03:21.457912 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-07 01:03:21.457924 | orchestrator | Saturday 07 March 2026 01:00:24 +0000 (0:00:00.335) 0:00:00.602 ********
2026-03-07 01:03:21.457935 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-07 01:03:21.457946 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-07 01:03:21.457959 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-07 01:03:21.457971 | orchestrator |
2026-03-07 01:03:21.458006 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-03-07 01:03:21.458074 | orchestrator |
2026-03-07 01:03:21.458135 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-07 01:03:21.458147 | orchestrator | Saturday 07 March 2026 01:00:24 +0000 (0:00:00.572) 0:00:01.174 ********
2026-03-07 01:03:21.458157 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:03:21.458166 |
orchestrator |
2026-03-07 01:03:21.458174 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-03-07 01:03:21.458182 | orchestrator | Saturday 07 March 2026 01:00:25 +0000 (0:00:00.600) 0:00:01.775 ********
2026-03-07 01:03:21.458196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 01:03:21.458224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 01:03:21.458236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 01:03:21.458269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 01:03:21.458281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 01:03:21.458295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 01:03:21.458303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 01:03:21.458315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 01:03:21.458323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 01:03:21.458331 | orchestrator |
2026-03-07 01:03:21.458339 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-03-07 01:03:21.458346 | orchestrator | Saturday 07 March 2026 01:00:27 +0000 (0:00:01.925) 0:00:03.701 ********
2026-03-07 01:03:21.458354 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:03:21.458361 | orchestrator |
2026-03-07 01:03:21.458369 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-03-07 01:03:21.458376 | orchestrator | Saturday 07 March 2026 01:00:27 +0000 (0:00:00.154) 0:00:03.855 ********
2026-03-07 01:03:21.458384 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:03:21.458391 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:03:21.458399 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:03:21.458502 | orchestrator |
2026-03-07 01:03:21.458513 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-03-07 01:03:21.458521 | orchestrator | Saturday 07 March 2026 01:00:27 +0000 (0:00:00.448) 0:00:04.303 ********
2026-03-07 01:03:21.458528 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-07 01:03:21.458536 | orchestrator |
2026-03-07 01:03:21.458543 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-07 01:03:21.458551 | orchestrator | Saturday 07 March 2026 01:00:28 +0000 (0:00:00.840) 0:00:05.144 ********
2026-03-07 01:03:21.458577 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:03:21.458593 | orchestrator |
2026-03-07 01:03:21.458601 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-03-07 01:03:21.458608 | orchestrator | Saturday 07 March 2026 01:00:29 +0000 (0:00:00.603) 0:00:05.748 ********
2026-03-07 01:03:21.458616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 01:03:21.458630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 01:03:21.458639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 01:03:21.458647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 01:03:21.458671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 01:03:21.458679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 01:03:21.458687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 01:03:21.458699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 01:03:21.458707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 01:03:21.458715 | orchestrator |
2026-03-07 01:03:21.458722 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-03-07 01:03:21.458730 | orchestrator | Saturday 07 March 2026 01:00:33 +0000 (0:00:03.844) 0:00:09.592 ********
2026-03-07 01:03:21.458743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 01:03:21.458756 | orchestrator |
skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 01:03:21.458764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 01:03:21.458772 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:03:21.458788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 01:03:21.458797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 01:03:21.458805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 01:03:21.458817 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:03:21.458832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 01:03:21.458840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 01:03:21.458848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 01:03:21.458856 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:03:21.458863 | orchestrator |
2026-03-07 01:03:21.458874 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-03-07 01:03:21.458882 | orchestrator | Saturday 07 March 2026 01:00:33 +0000 (0:00:00.634) 0:00:10.227 ********
2026-03-07 01:03:21.458890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 01:03:21.458898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 01:03:21.458919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 01:03:21.458927 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:03:21.458935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 01:03:21.458947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 01:03:21.458955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 01:03:21.458963 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:03:21.458971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-07 01:03:21.459015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-07 01:03:21.459031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-07 01:03:21.459040 |
orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:21.459047 | orchestrator | 2026-03-07 01:03:21.459055 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-07 01:03:21.459062 | orchestrator | Saturday 07 March 2026 01:00:34 +0000 (0:00:00.778) 0:00:11.005 ******** 2026-03-07 01:03:21.459075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 01:03:21.459087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 01:03:21.459121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 01:03:21.459139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-07 01:03:21.459152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-07 01:03:21.459164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-07 01:03:21.459183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:21.459196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:21.459216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:21.459229 | orchestrator | 2026-03-07 01:03:21.459241 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-07 01:03:21.459254 | orchestrator | Saturday 07 March 2026 01:00:38 +0000 (0:00:03.796) 0:00:14.801 ******** 2026-03-07 01:03:21.459277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 01:03:21.459292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 01:03:21.459312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 01:03:21.459335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 01:03:21.459351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': 
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 01:03:21.459362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 01:03:21.459371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:21.459382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:21.459390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:21.459405 | orchestrator | 2026-03-07 01:03:21.459412 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-07 01:03:21.459419 | orchestrator | Saturday 07 March 2026 01:00:44 +0000 (0:00:06.023) 0:00:20.824 ******** 2026-03-07 01:03:21.459427 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:03:21.459434 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:03:21.459442 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:03:21.459449 | orchestrator | 2026-03-07 01:03:21.459457 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-07 01:03:21.459464 | orchestrator | Saturday 07 March 2026 01:00:46 +0000 (0:00:01.644) 0:00:22.469 ******** 2026-03-07 01:03:21.459471 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:21.459478 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:03:21.459486 | 
orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:21.459493 | orchestrator | 2026-03-07 01:03:21.459501 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-07 01:03:21.459508 | orchestrator | Saturday 07 March 2026 01:00:46 +0000 (0:00:00.610) 0:00:23.080 ******** 2026-03-07 01:03:21.459515 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:21.459523 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:03:21.459530 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:21.459537 | orchestrator | 2026-03-07 01:03:21.459545 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-07 01:03:21.459676 | orchestrator | Saturday 07 March 2026 01:00:46 +0000 (0:00:00.347) 0:00:23.427 ******** 2026-03-07 01:03:21.459688 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:21.459696 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:03:21.459704 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:21.459712 | orchestrator | 2026-03-07 01:03:21.459720 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-07 01:03:21.459728 | orchestrator | Saturday 07 March 2026 01:00:47 +0000 (0:00:00.738) 0:00:24.166 ******** 2026-03-07 01:03:21.459745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-07 01:03:21.459755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 01:03:21.459778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-07 01:03:21.459787 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:21.459796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-07 01:03:21.459804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-07 01:03:21.459819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-07 01:03:21.459827 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:03:21.459836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-07 01:03:21.459854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-07 01:03:21.459862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-07 01:03:21.459870 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:21.459878 | orchestrator | 2026-03-07 01:03:21.459886 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-07 01:03:21.459894 | orchestrator | Saturday 07 March 2026 01:00:48 +0000 (0:00:00.627) 0:00:24.793 ******** 2026-03-07 01:03:21.459901 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:21.459909 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:03:21.459916 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:21.459924 | orchestrator | 2026-03-07 01:03:21.459931 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-07 01:03:21.459939 | orchestrator | Saturday 07 March 2026 01:00:48 +0000 (0:00:00.314) 0:00:25.108 ******** 2026-03-07 01:03:21.459946 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-07 01:03:21.459954 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-07 01:03:21.459962 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-07 01:03:21.459969 | orchestrator | 
2026-03-07 01:03:21.459977 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-07 01:03:21.460016 | orchestrator | Saturday 07 March 2026 01:00:50 +0000 (0:00:01.829) 0:00:26.937 ******** 2026-03-07 01:03:21.460024 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-07 01:03:21.460032 | orchestrator | 2026-03-07 01:03:21.460040 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-07 01:03:21.460047 | orchestrator | Saturday 07 March 2026 01:00:51 +0000 (0:00:01.193) 0:00:28.131 ******** 2026-03-07 01:03:21.460054 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:21.460061 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:03:21.460069 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:21.460076 | orchestrator | 2026-03-07 01:03:21.460083 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-07 01:03:21.460090 | orchestrator | Saturday 07 March 2026 01:00:52 +0000 (0:00:01.096) 0:00:29.227 ******** 2026-03-07 01:03:21.460103 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-07 01:03:21.460111 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-07 01:03:21.460118 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-07 01:03:21.460125 | orchestrator | 2026-03-07 01:03:21.460133 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-07 01:03:21.460147 | orchestrator | Saturday 07 March 2026 01:00:54 +0000 (0:00:01.587) 0:00:30.815 ******** 2026-03-07 01:03:21.460155 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:03:21.460162 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:03:21.460170 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:03:21.460178 | orchestrator | 2026-03-07 01:03:21.460185 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 
2026-03-07 01:03:21.460192 | orchestrator | Saturday 07 March 2026 01:00:54 +0000 (0:00:00.341) 0:00:31.156 ******** 2026-03-07 01:03:21.460200 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-07 01:03:21.460208 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-07 01:03:21.460215 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-07 01:03:21.460222 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-07 01:03:21.460230 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-07 01:03:21.460237 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-07 01:03:21.460245 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-07 01:03:21.460252 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-07 01:03:21.460260 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-07 01:03:21.460267 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-07 01:03:21.460275 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-07 01:03:21.460282 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-07 01:03:21.460289 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-07 01:03:21.460301 | orchestrator | changed: [testbed-node-2] => (item={'src': 
'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-07 01:03:21.460309 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-07 01:03:21.460317 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-07 01:03:21.460326 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-07 01:03:21.460335 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-07 01:03:21.460344 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-07 01:03:21.460352 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-07 01:03:21.460361 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-07 01:03:21.460370 | orchestrator | 2026-03-07 01:03:21.460378 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-07 01:03:21.460387 | orchestrator | Saturday 07 March 2026 01:01:03 +0000 (0:00:09.158) 0:00:40.315 ******** 2026-03-07 01:03:21.460395 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-07 01:03:21.460404 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-07 01:03:21.460413 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-07 01:03:21.460422 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-07 01:03:21.460430 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-07 01:03:21.460443 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-07 
01:03:21.460452 | orchestrator | 2026-03-07 01:03:21.460460 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-07 01:03:21.460469 | orchestrator | Saturday 07 March 2026 01:01:06 +0000 (0:00:03.138) 0:00:43.453 ******** 2026-03-07 01:03:21.460486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 01:03:21.460497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 01:03:21.460511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-07 01:03:21.460521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-07 01:03:21.460536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-07 01:03:21.460549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-07 01:03:21.460559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:21.460567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:21.460580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-07 01:03:21.460588 | orchestrator | 2026-03-07 01:03:21.460596 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-07 01:03:21.460603 | orchestrator | Saturday 07 March 2026 01:01:09 +0000 (0:00:02.419) 0:00:45.873 ******** 2026-03-07 01:03:21.460611 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:21.460618 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:03:21.460626 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:21.460633 | orchestrator | 2026-03-07 
01:03:21.460641 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-07 01:03:21.460649 | orchestrator | Saturday 07 March 2026 01:01:09 +0000 (0:00:00.325) 0:00:46.199 ******** 2026-03-07 01:03:21.460661 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:03:21.460668 | orchestrator | 2026-03-07 01:03:21.460675 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-07 01:03:21.460683 | orchestrator | Saturday 07 March 2026 01:01:12 +0000 (0:00:02.493) 0:00:48.692 ******** 2026-03-07 01:03:21.460690 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:03:21.460698 | orchestrator | 2026-03-07 01:03:21.460705 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-07 01:03:21.460712 | orchestrator | Saturday 07 March 2026 01:01:14 +0000 (0:00:02.385) 0:00:51.078 ******** 2026-03-07 01:03:21.460720 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:03:21.460727 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:03:21.460734 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:03:21.460741 | orchestrator | 2026-03-07 01:03:21.460749 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-07 01:03:21.460756 | orchestrator | Saturday 07 March 2026 01:01:15 +0000 (0:00:01.094) 0:00:52.172 ******** 2026-03-07 01:03:21.460764 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:03:21.460772 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:03:21.460779 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:03:21.460786 | orchestrator | 2026-03-07 01:03:21.460793 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-07 01:03:21.460801 | orchestrator | Saturday 07 March 2026 01:01:16 +0000 (0:00:00.388) 0:00:52.561 ******** 2026-03-07 01:03:21.460808 | orchestrator | skipping: [testbed-node-0] 2026-03-07 
01:03:21.460816 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:03:21.460823 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:21.460830 | orchestrator | 2026-03-07 01:03:21.460837 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-07 01:03:21.460845 | orchestrator | Saturday 07 March 2026 01:01:16 +0000 (0:00:00.331) 0:00:52.892 ******** 2026-03-07 01:03:21.460852 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:03:21.460859 | orchestrator | 2026-03-07 01:03:21.460866 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-07 01:03:21.460874 | orchestrator | Saturday 07 March 2026 01:01:32 +0000 (0:00:16.084) 0:01:08.977 ******** 2026-03-07 01:03:21.460881 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:03:21.460889 | orchestrator | 2026-03-07 01:03:21.460900 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-07 01:03:21.460908 | orchestrator | Saturday 07 March 2026 01:01:44 +0000 (0:00:11.752) 0:01:20.729 ******** 2026-03-07 01:03:21.460915 | orchestrator | 2026-03-07 01:03:21.460923 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-07 01:03:21.460930 | orchestrator | Saturday 07 March 2026 01:01:44 +0000 (0:00:00.127) 0:01:20.857 ******** 2026-03-07 01:03:21.460937 | orchestrator | 2026-03-07 01:03:21.460944 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-07 01:03:21.460952 | orchestrator | Saturday 07 March 2026 01:01:44 +0000 (0:00:00.128) 0:01:20.985 ******** 2026-03-07 01:03:21.460959 | orchestrator | 2026-03-07 01:03:21.460966 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-07 01:03:21.460974 | orchestrator | Saturday 07 March 2026 01:01:44 +0000 (0:00:00.128) 0:01:21.114 ******** 2026-03-07 
01:03:21.461003 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:03:21.461016 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:03:21.461029 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:03:21.461042 | orchestrator | 2026-03-07 01:03:21.461054 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-07 01:03:21.461066 | orchestrator | Saturday 07 March 2026 01:02:03 +0000 (0:00:19.275) 0:01:40.389 ******** 2026-03-07 01:03:21.461077 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:03:21.461084 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:03:21.461092 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:03:21.461099 | orchestrator | 2026-03-07 01:03:21.461106 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-07 01:03:21.461120 | orchestrator | Saturday 07 March 2026 01:02:14 +0000 (0:00:10.496) 0:01:50.886 ******** 2026-03-07 01:03:21.461127 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:03:21.461134 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:03:21.461142 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:03:21.461149 | orchestrator | 2026-03-07 01:03:21.461156 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-07 01:03:21.461164 | orchestrator | Saturday 07 March 2026 01:02:22 +0000 (0:00:08.064) 0:01:58.951 ******** 2026-03-07 01:03:21.461171 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:03:21.461179 | orchestrator | 2026-03-07 01:03:21.461186 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-07 01:03:21.461193 | orchestrator | Saturday 07 March 2026 01:02:23 +0000 (0:00:00.824) 0:01:59.776 ******** 2026-03-07 01:03:21.461201 | orchestrator | ok: [testbed-node-2] 2026-03-07 
01:03:21.461208 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:03:21.461215 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:03:21.461223 | orchestrator | 2026-03-07 01:03:21.461230 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-07 01:03:21.461242 | orchestrator | Saturday 07 March 2026 01:02:24 +0000 (0:00:00.853) 0:02:00.629 ******** 2026-03-07 01:03:21.461249 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:03:21.461256 | orchestrator | 2026-03-07 01:03:21.461263 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-07 01:03:21.461271 | orchestrator | Saturday 07 March 2026 01:02:25 +0000 (0:00:01.776) 0:02:02.405 ******** 2026-03-07 01:03:21.461279 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-07 01:03:21.461286 | orchestrator | 2026-03-07 01:03:21.461293 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-07 01:03:21.461301 | orchestrator | Saturday 07 March 2026 01:02:39 +0000 (0:00:13.239) 0:02:15.644 ******** 2026-03-07 01:03:21.461309 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-07 01:03:21.461316 | orchestrator | 2026-03-07 01:03:21.461323 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-07 01:03:21.461330 | orchestrator | Saturday 07 March 2026 01:03:06 +0000 (0:00:26.944) 0:02:42.589 ******** 2026-03-07 01:03:21.461337 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-07 01:03:21.461345 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-07 01:03:21.461352 | orchestrator | 2026-03-07 01:03:21.461360 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-07 01:03:21.461367 | 
orchestrator | Saturday 07 March 2026 01:03:13 +0000 (0:00:07.268) 0:02:49.857 ******** 2026-03-07 01:03:21.461374 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:21.461382 | orchestrator | 2026-03-07 01:03:21.461389 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-07 01:03:21.461396 | orchestrator | Saturday 07 March 2026 01:03:13 +0000 (0:00:00.186) 0:02:50.044 ******** 2026-03-07 01:03:21.461404 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:21.461411 | orchestrator | 2026-03-07 01:03:21.461418 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-07 01:03:21.461426 | orchestrator | Saturday 07 March 2026 01:03:13 +0000 (0:00:00.267) 0:02:50.312 ******** 2026-03-07 01:03:21.461433 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:21.461441 | orchestrator | 2026-03-07 01:03:21.461448 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-07 01:03:21.461456 | orchestrator | Saturday 07 March 2026 01:03:14 +0000 (0:00:00.302) 0:02:50.614 ******** 2026-03-07 01:03:21.461463 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:21.461470 | orchestrator | 2026-03-07 01:03:21.461478 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-07 01:03:21.461490 | orchestrator | Saturday 07 March 2026 01:03:14 +0000 (0:00:00.765) 0:02:51.380 ******** 2026-03-07 01:03:21.461497 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:03:21.461505 | orchestrator | 2026-03-07 01:03:21.461512 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-07 01:03:21.461520 | orchestrator | Saturday 07 March 2026 01:03:18 +0000 (0:00:03.625) 0:02:55.006 ******** 2026-03-07 01:03:21.461527 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:03:21.461541 | orchestrator | skipping: 
[testbed-node-1] 2026-03-07 01:03:21.461549 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:03:21.461557 | orchestrator | 2026-03-07 01:03:21.461564 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:03:21.461573 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-07 01:03:21.461581 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-07 01:03:21.461589 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-07 01:03:21.461596 | orchestrator | 2026-03-07 01:03:21.461603 | orchestrator | 2026-03-07 01:03:21.461611 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:03:21.461618 | orchestrator | Saturday 07 March 2026 01:03:19 +0000 (0:00:00.725) 0:02:55.731 ******** 2026-03-07 01:03:21.461626 | orchestrator | =============================================================================== 2026-03-07 01:03:21.461633 | orchestrator | service-ks-register : keystone | Creating services --------------------- 26.94s 2026-03-07 01:03:21.461640 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 19.28s 2026-03-07 01:03:21.461647 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.08s 2026-03-07 01:03:21.461654 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.24s 2026-03-07 01:03:21.461662 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.75s 2026-03-07 01:03:21.461669 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.50s 2026-03-07 01:03:21.461676 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.16s 2026-03-07 
01:03:21.461683 | orchestrator | keystone : Restart keystone container ----------------------------------- 8.06s 2026-03-07 01:03:21.461691 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.27s 2026-03-07 01:03:21.461698 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.02s 2026-03-07 01:03:21.461706 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.84s 2026-03-07 01:03:21.461713 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.80s 2026-03-07 01:03:21.461721 | orchestrator | keystone : Creating default user role ----------------------------------- 3.63s 2026-03-07 01:03:21.461732 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.14s 2026-03-07 01:03:21.461740 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.49s 2026-03-07 01:03:21.461747 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.42s 2026-03-07 01:03:21.461755 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.39s 2026-03-07 01:03:21.461762 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.93s 2026-03-07 01:03:21.461769 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.83s 2026-03-07 01:03:21.461777 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.78s 2026-03-07 01:03:21.461785 | orchestrator | 2026-03-07 01:03:21 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:03:21.461797 | orchestrator | 2026-03-07 01:03:21 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:03:21.461805 | orchestrator | 2026-03-07 01:03:21 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in 
state STARTED 2026-03-07 01:03:21.461812 | orchestrator | 2026-03-07 01:03:21 | INFO  | Task 581d95b4-6492-4051-a36c-c57f8ea8fbcb is in state STARTED 2026-03-07 01:03:21.461820 | orchestrator | 2026-03-07 01:03:21 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:03:24.508462 | orchestrator | 2026-03-07 01:03:24 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:03:24.508817 | orchestrator | 2026-03-07 01:03:24 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:03:24.515201 | orchestrator | 2026-03-07 01:03:24 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:03:24.515634 | orchestrator | 2026-03-07 01:03:24 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:03:24.516759 | orchestrator | 2026-03-07 01:03:24 | INFO  | Task 581d95b4-6492-4051-a36c-c57f8ea8fbcb is in state STARTED 2026-03-07 01:03:24.516837 | orchestrator | 2026-03-07 01:03:24 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:03:27.560397 | orchestrator | 2026-03-07 01:03:27 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:03:27.560478 | orchestrator | 2026-03-07 01:03:27 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:03:27.560647 | orchestrator | 2026-03-07 01:03:27 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:03:27.560769 | orchestrator | 2026-03-07 01:03:27 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:03:27.561978 | orchestrator | 2026-03-07 01:03:27 | INFO  | Task 581d95b4-6492-4051-a36c-c57f8ea8fbcb is in state STARTED 2026-03-07 01:03:27.562140 | orchestrator | 2026-03-07 01:03:27 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:03:30.598954 | orchestrator | 2026-03-07 01:03:30 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 
01:03:30.600476 | orchestrator | 2026-03-07 01:03:30 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:03:30.601951 | orchestrator | 2026-03-07 01:03:30 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:03:30.603490 | orchestrator | 2026-03-07 01:03:30 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:03:30.604685 | orchestrator | 2026-03-07 01:03:30 | INFO  | Task 581d95b4-6492-4051-a36c-c57f8ea8fbcb is in state STARTED 2026-03-07 01:03:30.604719 | orchestrator | 2026-03-07 01:03:30 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:03:33.643569 | orchestrator | 2026-03-07 01:03:33 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:03:33.645586 | orchestrator | 2026-03-07 01:03:33 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:03:33.647607 | orchestrator | 2026-03-07 01:03:33 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:03:33.649331 | orchestrator | 2026-03-07 01:03:33 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:03:33.651526 | orchestrator | 2026-03-07 01:03:33 | INFO  | Task 581d95b4-6492-4051-a36c-c57f8ea8fbcb is in state STARTED 2026-03-07 01:03:33.651606 | orchestrator | 2026-03-07 01:03:33 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:03:36.684117 | orchestrator | 2026-03-07 01:03:36 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:03:36.685657 | orchestrator | 2026-03-07 01:03:36 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:03:36.686290 | orchestrator | 2026-03-07 01:03:36 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:03:36.687365 | orchestrator | 2026-03-07 01:03:36 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 
01:03:36.688797 | orchestrator | 2026-03-07 01:03:36 | INFO  | Task 581d95b4-6492-4051-a36c-c57f8ea8fbcb is in state STARTED 2026-03-07 01:03:36.688829 | orchestrator | 2026-03-07 01:03:36 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:03:39.738669 | orchestrator | 2026-03-07 01:03:39 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:03:39.740493 | orchestrator | 2026-03-07 01:03:39 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:03:39.743971 | orchestrator | 2026-03-07 01:03:39 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:03:39.747182 | orchestrator | 2026-03-07 01:03:39 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:03:39.747259 | orchestrator | 2026-03-07 01:03:39 | INFO  | Task 581d95b4-6492-4051-a36c-c57f8ea8fbcb is in state STARTED 2026-03-07 01:03:39.747275 | orchestrator | 2026-03-07 01:03:39 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:03:42.780986 | orchestrator | 2026-03-07 01:03:42 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:03:42.781111 | orchestrator | 2026-03-07 01:03:42 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:03:42.781447 | orchestrator | 2026-03-07 01:03:42 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:03:42.782289 | orchestrator | 2026-03-07 01:03:42 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:03:42.782808 | orchestrator | 2026-03-07 01:03:42 | INFO  | Task 581d95b4-6492-4051-a36c-c57f8ea8fbcb is in state STARTED 2026-03-07 01:03:42.782839 | orchestrator | 2026-03-07 01:03:42 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:03:45.819210 | orchestrator | 2026-03-07 01:03:45 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:03:45.819499 | orchestrator 
| 2026-03-07 01:03:45 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:03:45.820554 | orchestrator | 2026-03-07 01:03:45 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:03:45.821121 | orchestrator | 2026-03-07 01:03:45 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:03:45.826934 | orchestrator | 2026-03-07 01:03:45 | INFO  | Task 581d95b4-6492-4051-a36c-c57f8ea8fbcb is in state STARTED 2026-03-07 01:03:45.827083 | orchestrator | 2026-03-07 01:03:45 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:03:48.860130 | orchestrator | 2026-03-07 01:03:48 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:03:48.860504 | orchestrator | 2026-03-07 01:03:48 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:03:48.861320 | orchestrator | 2026-03-07 01:03:48 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:03:48.862280 | orchestrator | 2026-03-07 01:03:48 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:03:48.862757 | orchestrator | 2026-03-07 01:03:48 | INFO  | Task 581d95b4-6492-4051-a36c-c57f8ea8fbcb is in state STARTED 2026-03-07 01:03:48.862819 | orchestrator | 2026-03-07 01:03:48 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:03:52.192238 | orchestrator | 2026-03-07 01:03:52 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:03:52.192332 | orchestrator | 2026-03-07 01:03:52 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:03:52.192340 | orchestrator | 2026-03-07 01:03:52 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:03:52.192348 | orchestrator | 2026-03-07 01:03:52 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:03:52.192354 | orchestrator | 
2026-03-07 01:03:52 | INFO  | Task 581d95b4-6492-4051-a36c-c57f8ea8fbcb is in state STARTED 2026-03-07 01:03:52.192361 | orchestrator | 2026-03-07 01:03:52 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:03:55.259567 | orchestrator | 2026-03-07 01:03:55 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:03:55.260921 | orchestrator | 2026-03-07 01:03:55 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:03:55.267634 | orchestrator | 2026-03-07 01:03:55 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:03:55.268866 | orchestrator | 2026-03-07 01:03:55 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:03:55.270443 | orchestrator | 2026-03-07 01:03:55 | INFO  | Task 581d95b4-6492-4051-a36c-c57f8ea8fbcb is in state STARTED 2026-03-07 01:03:55.270629 | orchestrator | 2026-03-07 01:03:55 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:03:58.303213 | orchestrator | 2026-03-07 01:03:58 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:03:58.303905 | orchestrator | 2026-03-07 01:03:58 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:03:58.305252 | orchestrator | 2026-03-07 01:03:58 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:03:58.307867 | orchestrator | 2026-03-07 01:03:58 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:03:58.310492 | orchestrator | 2026-03-07 01:03:58 | INFO  | Task 581d95b4-6492-4051-a36c-c57f8ea8fbcb is in state STARTED 2026-03-07 01:03:58.310575 | orchestrator | 2026-03-07 01:03:58 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:04:01.335097 | orchestrator | 2026-03-07 01:04:01 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:04:01.335221 | orchestrator | 2026-03-07 01:04:01 | INFO  | 
Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:04:01.335572 | orchestrator | 2026-03-07 01:04:01 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:04:01.336317 | orchestrator | 2026-03-07 01:04:01 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:04:01.337051 | orchestrator | 2026-03-07 01:04:01 | INFO  | Task 581d95b4-6492-4051-a36c-c57f8ea8fbcb is in state STARTED 2026-03-07 01:04:01.337291 | orchestrator | 2026-03-07 01:04:01 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:04:04.370527 | orchestrator | 2026-03-07 01:04:04 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:04:04.370643 | orchestrator | 2026-03-07 01:04:04 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:04:04.371583 | orchestrator | 2026-03-07 01:04:04 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:04:04.373461 | orchestrator | 2026-03-07 01:04:04 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:04:04.374206 | orchestrator | 2026-03-07 01:04:04 | INFO  | Task 581d95b4-6492-4051-a36c-c57f8ea8fbcb is in state SUCCESS 2026-03-07 01:04:04.374461 | orchestrator | 2026-03-07 01:04:04 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:04:07.409142 | orchestrator | 2026-03-07 01:04:07 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:04:07.409681 | orchestrator | 2026-03-07 01:04:07 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:04:07.410744 | orchestrator | 2026-03-07 01:04:07 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:04:07.411956 | orchestrator | 2026-03-07 01:04:07 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:04:07.413857 | orchestrator | 2026-03-07 01:04:07 | INFO  | Task 
6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:04:07.413902 | orchestrator | 2026-03-07 01:04:07 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:04:10.455578 | orchestrator | 2026-03-07 01:04:10 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:04:10.460508 | orchestrator | 2026-03-07 01:04:10 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:04:10.461487 | orchestrator | 2026-03-07 01:04:10 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:04:10.462710 | orchestrator | 2026-03-07 01:04:10 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:04:10.463874 | orchestrator | 2026-03-07 01:04:10 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:04:10.463913 | orchestrator | 2026-03-07 01:04:10 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:04:13.508120 | orchestrator | 2026-03-07 01:04:13 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:04:13.508236 | orchestrator | 2026-03-07 01:04:13 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:04:13.508796 | orchestrator | 2026-03-07 01:04:13 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:04:13.509866 | orchestrator | 2026-03-07 01:04:13 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:04:13.510689 | orchestrator | 2026-03-07 01:04:13 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:04:13.511567 | orchestrator | 2026-03-07 01:04:13 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:04:16.550605 | orchestrator | 2026-03-07 01:04:16 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:04:16.550695 | orchestrator | 2026-03-07 01:04:16 | INFO  | Task 
bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:04:16.550714 | orchestrator | 2026-03-07 01:04:16 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:04:16.552400 | orchestrator | 2026-03-07 01:04:16 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:04:16.553322 | orchestrator | 2026-03-07 01:04:16 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:04:16.553823 | orchestrator | 2026-03-07 01:04:16 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:04:19.599710 | orchestrator | 2026-03-07 01:04:19 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:04:19.600222 | orchestrator | 2026-03-07 01:04:19 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:04:19.601769 | orchestrator | 2026-03-07 01:04:19 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:04:19.602733 | orchestrator | 2026-03-07 01:04:19 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:04:19.603529 | orchestrator | 2026-03-07 01:04:19 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:04:19.603573 | orchestrator | 2026-03-07 01:04:19 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:04:22.674680 | orchestrator | 2026-03-07 01:04:22 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:04:22.674821 | orchestrator | 2026-03-07 01:04:22 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:04:22.674849 | orchestrator | 2026-03-07 01:04:22 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:04:22.674869 | orchestrator | 2026-03-07 01:04:22 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:04:22.674887 | orchestrator | 2026-03-07 01:04:22 | INFO  | Task 
6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:04:22.674922 | orchestrator | 2026-03-07 01:04:22 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:04:25.674791 | orchestrator | 2026-03-07 01:04:25 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:04:25.677760 | orchestrator | 2026-03-07 01:04:25 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:04:25.678749 | orchestrator | 2026-03-07 01:04:25 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:04:25.679789 | orchestrator | 2026-03-07 01:04:25 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:04:25.680420 | orchestrator | 2026-03-07 01:04:25 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:04:25.680453 | orchestrator | 2026-03-07 01:04:25 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:04:28.717915 | orchestrator | 2026-03-07 01:04:28 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:04:28.718325 | orchestrator | 2026-03-07 01:04:28 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:04:28.719191 | orchestrator | 2026-03-07 01:04:28 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:04:28.720084 | orchestrator | 2026-03-07 01:04:28 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:04:28.721319 | orchestrator | 2026-03-07 01:04:28 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:04:28.721380 | orchestrator | 2026-03-07 01:04:28 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:04:31.745509 | orchestrator | 2026-03-07 01:04:31 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:04:31.745816 | orchestrator | 2026-03-07 01:04:31 | INFO  | Task 
bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:04:31.746731 | orchestrator | 2026-03-07 01:04:31 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:04:31.748262 | orchestrator | 2026-03-07 01:04:31 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:04:31.748926 | orchestrator | 2026-03-07 01:04:31 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:04:31.748979 | orchestrator | 2026-03-07 01:04:31 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:04:34.773114 | orchestrator | 2026-03-07 01:04:34 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:04:34.773716 | orchestrator | 2026-03-07 01:04:34 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:04:34.774849 | orchestrator | 2026-03-07 01:04:34 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:04:34.775356 | orchestrator | 2026-03-07 01:04:34 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:04:34.775811 | orchestrator | 2026-03-07 01:04:34 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:04:34.775837 | orchestrator | 2026-03-07 01:04:34 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:04:37.801528 | orchestrator | 2026-03-07 01:04:37 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:04:37.802550 | orchestrator | 2026-03-07 01:04:37 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:04:37.803071 | orchestrator | 2026-03-07 01:04:37 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:04:37.803846 | orchestrator | 2026-03-07 01:04:37 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:04:37.804469 | orchestrator | 2026-03-07 01:04:37 | INFO  | Task 
6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:04:37.804492 | orchestrator | 2026-03-07 01:04:37 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:04:40.841756 | orchestrator | 2026-03-07 01:04:40 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:04:40.842411 | orchestrator | 2026-03-07 01:04:40 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:04:40.843391 | orchestrator | 2026-03-07 01:04:40 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:04:40.845294 | orchestrator | 2026-03-07 01:04:40 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:04:40.845916 | orchestrator | 2026-03-07 01:04:40 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:04:40.846098 | orchestrator | 2026-03-07 01:04:40 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:04:43.867655 | orchestrator | 2026-03-07 01:04:43 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:04:43.868075 | orchestrator | 2026-03-07 01:04:43 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:04:43.868639 | orchestrator | 2026-03-07 01:04:43 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED 2026-03-07 01:04:43.869560 | orchestrator | 2026-03-07 01:04:43 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED 2026-03-07 01:04:43.870275 | orchestrator | 2026-03-07 01:04:43 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:04:43.870294 | orchestrator | 2026-03-07 01:04:43 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:04:46.902635 | orchestrator | 2026-03-07 01:04:46 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:04:46.903976 | orchestrator | 2026-03-07 01:04:46 | INFO  | Task 
bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED
2026-03-07 01:04:46.904341 | orchestrator | 2026-03-07 01:04:46 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state STARTED
2026-03-07 01:04:46.905333 | orchestrator | 2026-03-07 01:04:46 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED
2026-03-07 01:04:46.906252 | orchestrator | 2026-03-07 01:04:46 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED
2026-03-07 01:04:46.906297 | orchestrator | 2026-03-07 01:04:46 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:04:49.967474 | orchestrator | 2026-03-07 01:04:49 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:04:49.967751 | orchestrator | 2026-03-07 01:04:49 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED
2026-03-07 01:04:49.968455 | orchestrator | 2026-03-07 01:04:49 | INFO  | Task b3ae0acc-eae9-44d9-9bf6-090a2b33ed91 is in state SUCCESS
2026-03-07 01:04:49.969170 | orchestrator |
2026-03-07 01:04:49.969200 | orchestrator |
2026-03-07 01:04:49.969209 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 01:04:49.969218 | orchestrator |
2026-03-07 01:04:49.969225 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-07 01:04:49.969233 | orchestrator | Saturday 07 March 2026 01:03:22 +0000 (0:00:00.466) 0:00:00.467 ********
2026-03-07 01:04:49.969241 | orchestrator | ok: [testbed-manager]
2026-03-07 01:04:49.969249 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:04:49.969256 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:04:49.969264 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:04:49.969270 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:04:49.969277 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:04:49.969284 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:04:49.969291 | orchestrator |
2026-03-07 01:04:49.969298 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-07 01:04:49.969305 | orchestrator | Saturday 07 March 2026 01:03:24 +0000 (0:00:02.045) 0:00:02.512 ********
2026-03-07 01:04:49.969313 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-03-07 01:04:49.969320 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-03-07 01:04:49.969328 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-03-07 01:04:49.969335 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-03-07 01:04:49.969342 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-03-07 01:04:49.969349 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-03-07 01:04:49.969356 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-03-07 01:04:49.969363 | orchestrator |
2026-03-07 01:04:49.969370 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-07 01:04:49.969376 | orchestrator |
2026-03-07 01:04:49.969383 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-03-07 01:04:49.969390 | orchestrator | Saturday 07 March 2026 01:03:26 +0000 (0:00:02.008) 0:00:04.520 ********
2026-03-07 01:04:49.969398 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-07 01:04:49.969406 | orchestrator |
2026-03-07 01:04:49.969413 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-03-07 01:04:49.969421 | orchestrator | Saturday 07 March 2026 01:03:28 +0000 (0:00:02.106) 0:00:06.626 ********
2026-03-07 01:04:49.969428 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-03-07 01:04:49.969435 | orchestrator |
2026-03-07 01:04:49.969442 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-03-07 01:04:49.969449 | orchestrator | Saturday 07 March 2026 01:03:32 +0000 (0:00:03.993) 0:00:10.620 ********
2026-03-07 01:04:49.969456 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-03-07 01:04:49.969464 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-03-07 01:04:49.969471 | orchestrator |
2026-03-07 01:04:49.969500 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-03-07 01:04:49.969507 | orchestrator | Saturday 07 March 2026 01:03:38 +0000 (0:00:06.533) 0:00:17.154 ********
2026-03-07 01:04:49.969515 | orchestrator | ok: [testbed-manager] => (item=service)
2026-03-07 01:04:49.969521 | orchestrator |
2026-03-07 01:04:49.969528 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-03-07 01:04:49.969536 | orchestrator | Saturday 07 March 2026 01:03:42 +0000 (0:00:03.880) 0:00:21.034 ********
2026-03-07 01:04:49.969547 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-07 01:04:49.969559 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-03-07 01:04:49.969572 | orchestrator |
2026-03-07 01:04:49.969692 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-03-07 01:04:49.969705 | orchestrator | Saturday 07 March 2026 01:03:48 +0000 (0:00:06.124) 0:00:27.158 ********
2026-03-07 01:04:49.969717 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-03-07 01:04:49.969728 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-03-07 01:04:49.969739 | orchestrator |
2026-03-07 01:04:49.969746 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-03-07 01:04:49.969753 | orchestrator | Saturday 07 March 2026 01:03:57 +0000 (0:00:08.561) 0:00:35.720 ********
2026-03-07 01:04:49.969760 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-03-07 01:04:49.969767 | orchestrator |
2026-03-07 01:04:49.969774 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:04:49.969781 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:04:49.969789 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:04:49.969810 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:04:49.969818 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:04:49.969825 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:04:49.969846 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:04:49.969854 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:04:49.969860 | orchestrator |
2026-03-07 01:04:49.969867 | orchestrator |
2026-03-07 01:04:49.969874 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 01:04:49.969881 | orchestrator | Saturday 07 March 2026 01:04:02 +0000 (0:00:05.286) 0:00:41.007 ********
2026-03-07 01:04:49.969888 | orchestrator | ===============================================================================
2026-03-07 01:04:49.969895 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 8.56s
2026-03-07 01:04:49.969902 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.53s
2026-03-07 01:04:49.969909 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 6.12s
2026-03-07 01:04:49.969916 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.29s
2026-03-07 01:04:49.969923 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.99s
2026-03-07 01:04:49.969931 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.88s
2026-03-07 01:04:49.969937 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.11s
2026-03-07 01:04:49.969944 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.05s
2026-03-07 01:04:49.969960 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.01s
2026-03-07 01:04:49.969967 | orchestrator |
2026-03-07 01:04:49.969974 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-07 01:04:49.969981 | orchestrator | 2.16.14
2026-03-07 01:04:49.969988 | orchestrator |
2026-03-07 01:04:49.969995 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-03-07 01:04:49.970002 | orchestrator |
2026-03-07 01:04:49.970009 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-03-07 01:04:49.970081 | orchestrator | Saturday 07 March 2026 01:03:11 +0000 (0:00:00.303) 0:00:00.303 ********
2026-03-07 01:04:49.970089 | orchestrator | changed: [testbed-manager]
2026-03-07 01:04:49.970096 | orchestrator |
2026-03-07 01:04:49.970103 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-03-07 01:04:49.970110 | orchestrator | Saturday 07 March 2026 01:03:12 +0000 (0:00:01.763) 0:00:02.066 ********
2026-03-07 01:04:49.970117 | orchestrator | changed: [testbed-manager]
2026-03-07 01:04:49.970124 | orchestrator |
2026-03-07 01:04:49.970131 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-03-07 01:04:49.970138 | orchestrator | Saturday 07 March 2026 01:03:14 +0000 (0:00:01.179) 0:00:03.246 ********
2026-03-07 01:04:49.970145 | orchestrator | changed: [testbed-manager]
2026-03-07 01:04:49.970152 | orchestrator |
2026-03-07 01:04:49.970159 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-03-07 01:04:49.970166 | orchestrator | Saturday 07 March 2026 01:03:15 +0000 (0:00:01.232) 0:00:04.479 ********
2026-03-07 01:04:49.970173 | orchestrator | changed: [testbed-manager]
2026-03-07 01:04:49.970180 | orchestrator |
2026-03-07 01:04:49.970187 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-03-07 01:04:49.970194 | orchestrator | Saturday 07 March 2026 01:03:17 +0000 (0:00:01.843) 0:00:06.323 ********
2026-03-07 01:04:49.970200 | orchestrator | changed: [testbed-manager]
2026-03-07 01:04:49.970207 | orchestrator |
2026-03-07 01:04:49.970214 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-03-07 01:04:49.970220 | orchestrator | Saturday 07 March 2026 01:03:18 +0000 (0:00:01.211) 0:00:07.535 ********
2026-03-07 01:04:49.970227 | orchestrator | changed: [testbed-manager]
2026-03-07 01:04:49.970234 | orchestrator |
2026-03-07 01:04:49.970241 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-03-07 01:04:49.970248 | orchestrator | Saturday 07 March 2026 01:03:20 +0000 (0:00:01.930) 0:00:09.465 ********
2026-03-07 01:04:49.970255 | orchestrator | changed: [testbed-manager]
2026-03-07 01:04:49.970262 | orchestrator |
2026-03-07 01:04:49.970268 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-03-07 01:04:49.970275 | orchestrator | Saturday 07 March 2026 01:03:22 +0000 (0:00:01.573) 0:00:11.424 ********
2026-03-07 01:04:49.970282 | orchestrator | changed: [testbed-manager]
2026-03-07 01:04:49.970289 | orchestrator |
2026-03-07 01:04:49.970296 | orchestrator | TASK [Create admin user] *******************************************************
2026-03-07 01:04:49.970303 | orchestrator | Saturday 07 March 2026 01:03:23 +0000 (0:00:01.573) 0:00:12.998 ********
2026-03-07 01:04:49.970311 | orchestrator | changed: [testbed-manager]
2026-03-07 01:04:49.970320 | orchestrator |
2026-03-07 01:04:49.970328 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-03-07 01:04:49.970337 | orchestrator | Saturday 07 March 2026 01:04:24 +0000 (0:01:00.476) 0:01:13.474 ********
2026-03-07 01:04:49.970345 | orchestrator | skipping: [testbed-manager]
2026-03-07 01:04:49.970353 | orchestrator |
2026-03-07 01:04:49.970362 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-07 01:04:49.970369 | orchestrator |
2026-03-07 01:04:49.970378 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-07 01:04:49.970387 | orchestrator | Saturday 07 March 2026 01:04:24 +0000 (0:00:00.163) 0:01:13.638 ********
2026-03-07 01:04:49.970395 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:04:49.970403 | orchestrator |
2026-03-07 01:04:49.970422 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-07 01:04:49.970431 | orchestrator |
2026-03-07 01:04:49.970439 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-07 01:04:49.970447 | orchestrator | Saturday 07 March 2026 01:04:36 +0000 (0:00:11.779) 0:01:25.418 ********
2026-03-07 01:04:49.970455 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:04:49.970463 | orchestrator |
2026-03-07 01:04:49.970472 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-07 01:04:49.970480 | orchestrator |
2026-03-07 01:04:49.970488 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-07 01:04:49.970503 | orchestrator | Saturday 07 March 2026 01:04:37 +0000 (0:00:01.442) 0:01:26.860 ********
2026-03-07 01:04:49.970512 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:04:49.970520 | orchestrator |
2026-03-07 01:04:49.970528 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:04:49.970536 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-07 01:04:49.970545 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:04:49.970553 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:04:49.970561 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:04:49.970569 | orchestrator |
2026-03-07 01:04:49.970578 | orchestrator |
2026-03-07 01:04:49.970586 | orchestrator |
2026-03-07 01:04:49.970595 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 01:04:49.970603 | orchestrator | Saturday 07 March 2026 01:04:48 +0000 (0:00:11.231) 0:01:38.091 ********
2026-03-07 01:04:49.970611 | orchestrator | ===============================================================================
2026-03-07 01:04:49.970619 | orchestrator | Create admin user ------------------------------------------------------ 60.48s
2026-03-07 01:04:49.970628 | orchestrator | Restart ceph manager service ------------------------------------------- 24.45s
2026-03-07 01:04:49.970636 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.96s
2026-03-07 01:04:49.970644 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.93s
2026-03-07 01:04:49.970653 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.84s
2026-03-07 01:04:49.970661 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.76s
2026-03-07 01:04:49.970670 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.57s
2026-03-07 01:04:49.970678 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.23s
2026-03-07 01:04:49.970686 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.21s
2026-03-07 01:04:49.970694 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.18s
2026-03-07 01:04:49.970700 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s
2026-03-07 01:04:49.970707 | orchestrator | 2026-03-07 01:04:49 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED
2026-03-07 01:04:49.971513 | orchestrator | 2026-03-07 01:04:49 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED
2026-03-07 01:04:49.971592 | orchestrator | 2026-03-07 01:04:49 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:04:53.028526 | orchestrator | 2026-03-07 01:04:53 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:04:53.028623 | orchestrator | 2026-03-07 01:04:53 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED
2026-03-07 01:04:53.028657 | orchestrator | 2026-03-07 01:04:53 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED
2026-03-07 01:04:53.028665 | orchestrator | 2026-03-07 01:04:53 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED
2026-03-07 01:04:53.028672 | orchestrator | 2026-03-07 01:04:53 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:04:56.039686 | orchestrator | 2026-03-07 01:04:56 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:04:56.039834 | orchestrator | 2026-03-07 01:04:56 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED
2026-03-07 01:04:56.040607 | orchestrator | 2026-03-07 01:04:56 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state STARTED
2026-03-07 01:04:56.041496 | orchestrator | 2026-03-07 01:04:56 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED
2026-03-07 01:04:56.041522 | orchestrator | 2026-03-07 01:04:56 | INFO  | Wait 1 second(s) until the next check
[... identical status checks for the same four tasks repeat every ~3 seconds until 01:06:46 ...]
2026-03-07 01:06:46.071441 | orchestrator | 2026-03-07 01:06:46 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:06:46.072928 | orchestrator | 2026-03-07 01:06:46 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED
2026-03-07 01:06:46.081892 | orchestrator | 2026-03-07 01:06:46 | INFO  | Task a0b37df1-4e59-46e1-8f5e-a35127a55ab0 is in state SUCCESS
2026-03-07 01:06:46.096218 | orchestrator |
2026-03-07 01:06:46.096279 | orchestrator |
2026-03-07 01:06:46.096287 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 01:06:46.096296 | orchestrator |
2026-03-07 01:06:46.096304 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-07 01:06:46.096343 | orchestrator | Saturday 07 March 2026 01:03:28 +0000 (0:00:00.365) 0:00:00.366 ********
2026-03-07 01:06:46.096352 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:06:46.096360 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:06:46.096368 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:06:46.096375 | orchestrator |
2026-03-07 01:06:46.096382 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-07 01:06:46.096391 | orchestrator | Saturday 07 March 2026 01:03:29 +0000 (0:00:00.329) 0:00:00.695 ********
2026-03-07 01:06:46.096398 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-03-07 01:06:46.096406 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-03-07 01:06:46.096414 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-03-07 01:06:46.096421 | orchestrator |
2026-03-07 01:06:46.096428 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-03-07 01:06:46.096436 | orchestrator |
2026-03-07 01:06:46.096443 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-07 01:06:46.096451 | orchestrator | Saturday 07 March 2026 01:03:29 +0000 (0:00:00.486) 0:00:01.182 ********
2026-03-07 01:06:46.096458 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:06:46.096466 | orchestrator |
2026-03-07 01:06:46.096473 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-03-07 01:06:46.096481 | orchestrator | Saturday 07 March 2026 01:03:30 +0000 (0:00:00.527) 0:00:01.710 ********
2026-03-07 01:06:46.096521 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-03-07 01:06:46.096530 | orchestrator |
2026-03-07 01:06:46.096537 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-03-07 01:06:46.096545 | orchestrator | Saturday 07 March 2026 01:03:33 +0000 (0:00:03.626) 0:00:05.336 ********
2026-03-07 01:06:46.096552 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-03-07 01:06:46.096559 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-03-07 01:06:46.096567 | orchestrator |
2026-03-07 01:06:46.096574 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-03-07 01:06:46.096581 | orchestrator | Saturday 07 March 2026 01:03:40 +0000 (0:00:06.770) 0:00:12.107 ********
2026-03-07 01:06:46.096589 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-07 01:06:46.096596 | orchestrator |
2026-03-07 01:06:46.096603 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-03-07 01:06:46.096611 | orchestrator | Saturday 07 March 2026 01:03:44 +0000 (0:00:03.542) 0:00:15.649 ********
2026-03-07 01:06:46.096618 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-07 01:06:46.096634 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-03-07 01:06:46.096641 | orchestrator |
2026-03-07 01:06:46.096649 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-03-07 01:06:46.096655 | orchestrator | Saturday 07 March 2026 01:03:48 +0000 (0:00:04.403) 0:00:20.052 ********
2026-03-07 01:06:46.096661 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-07 01:06:46.096668 | orchestrator |
2026-03-07 01:06:46.096675 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2026-03-07 01:06:46.096711 | orchestrator | Saturday 07 March 2026 01:03:53 +0000 (0:00:04.544) 0:00:24.597 ********
2026-03-07 01:06:46.096720 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2026-03-07 01:06:46.096727 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2026-03-07 01:06:46.096734 | orchestrator |
2026-03-07 01:06:46.096741 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2026-03-07 01:06:46.096748 | orchestrator | Saturday 07 March 2026 01:04:01 +0000 (0:00:08.712) 0:00:33.310 ********
2026-03-07 01:06:46.096757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-07 01:06:46.096785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-07 01:06:46.096799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-07 01:06:46.096807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-07 01:06:46.096815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-07 01:06:46.096823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-07 01:06:46.096831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-07 01:06:46.096846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-07 01:06:46.096859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-07 01:06:46.096867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-07 01:06:46.096875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-07 01:06:46.096883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-07 01:06:46.096891 | orchestrator |
2026-03-07 01:06:46.096898 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-07 01:06:46.096906 | orchestrator | Saturday 07 March 2026 01:04:04 +0000 (0:00:02.965) 0:00:36.276 ********
2026-03-07 01:06:46.096913 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:06:46.096920 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:06:46.096928 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:06:46.096935 | orchestrator |
2026-03-07 01:06:46.096971 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-07 01:06:46.096984 | orchestrator | Saturday 07 March 2026 01:04:05 +0000 (0:00:00.468) 0:00:36.744 ********
2026-03-07 01:06:46.096991 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:06:46.096999 | orchestrator |
2026-03-07 01:06:46.097010 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-03-07 01:06:46.097017 | orchestrator | Saturday 07 March 2026 01:04:06 +0000 (0:00:01.140) 0:00:37.885 ********
2026-03-07 01:06:46.097028 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-03-07 01:06:46.097036 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-03-07 01:06:46.097043 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-03-07 01:06:46.097051 |
orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-07 01:06:46.097065 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-07 01:06:46.097072 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-07 01:06:46.097080 | orchestrator | 2026-03-07 01:06:46.097116 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-07 01:06:46.097124 | orchestrator | Saturday 07 March 2026 01:04:08 +0000 (0:00:02.370) 0:00:40.256 ******** 2026-03-07 01:06:46.097132 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-07 01:06:46.097140 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-07 01:06:46.097183 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-07 01:06:46.097191 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-07 01:06:46.097215 | orchestrator | skipping: [testbed-node-2] => 
(item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-07 01:06:46.097224 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-07 01:06:46.097232 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-07 01:06:46.097240 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-07 01:06:46.097248 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-07 01:06:46.097266 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-07 01:06:46.097274 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-07 01:06:46.097281 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-07 01:06:46.097289 | orchestrator | 2026-03-07 01:06:46.097296 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-07 01:06:46.097303 | orchestrator | Saturday 07 March 2026 01:04:12 +0000 (0:00:03.940) 0:00:44.196 ******** 2026-03-07 01:06:46.097311 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-07 01:06:46.097318 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-07 01:06:46.097325 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-07 01:06:46.097333 | orchestrator | 2026-03-07 01:06:46.097340 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-07 01:06:46.097347 | orchestrator | Saturday 07 March 2026 01:04:15 +0000 (0:00:03.021) 0:00:47.217 ******** 2026-03-07 01:06:46.097354 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-07 01:06:46.097361 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-07 01:06:46.097372 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-07 01:06:46.097386 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 
2026-03-07 01:06:46.097393 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-07 01:06:46.097400 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-07 01:06:46.097408 | orchestrator | 2026-03-07 01:06:46.097415 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-07 01:06:46.097422 | orchestrator | Saturday 07 March 2026 01:04:19 +0000 (0:00:03.882) 0:00:51.100 ******** 2026-03-07 01:06:46.097429 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-07 01:06:46.097437 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-07 01:06:46.097444 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-07 01:06:46.097451 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-07 01:06:46.097458 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-07 01:06:46.097465 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-07 01:06:46.097472 | orchestrator | 2026-03-07 01:06:46.097479 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-07 01:06:46.097487 | orchestrator | Saturday 07 March 2026 01:04:21 +0000 (0:00:01.769) 0:00:52.870 ******** 2026-03-07 01:06:46.097494 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:06:46.097501 | orchestrator | 2026-03-07 01:06:46.097509 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-07 01:06:46.097515 | orchestrator | Saturday 07 March 2026 01:04:21 +0000 (0:00:00.376) 0:00:53.246 ******** 2026-03-07 01:06:46.097523 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:06:46.097530 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:06:46.097541 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:06:46.097548 | orchestrator | 2026-03-07 01:06:46.097555 | orchestrator | TASK 
[cinder : include_tasks] ************************************************** 2026-03-07 01:06:46.097562 | orchestrator | Saturday 07 March 2026 01:04:22 +0000 (0:00:00.893) 0:00:54.140 ******** 2026-03-07 01:06:46.097573 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:06:46.097580 | orchestrator | 2026-03-07 01:06:46.097587 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-07 01:06:46.097594 | orchestrator | Saturday 07 March 2026 01:04:24 +0000 (0:00:01.825) 0:00:55.965 ******** 2026-03-07 01:06:46.097602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:06:46.097610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:06:46.097622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:06:46.097630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.097644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.097652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.097658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.097665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.097676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.097684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.097962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.097975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.097982 | orchestrator | 2026-03-07 01:06:46.097990 | orchestrator | TASK 
[service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-07 01:06:46.097997 | orchestrator | Saturday 07 March 2026 01:04:29 +0000 (0:00:05.121) 0:01:01.087 ******** 2026-03-07 01:06:46.098005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 01:06:46.098049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098078 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:06:46.098101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 01:06:46.098110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 01:06:46.098144 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:06:46.098157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098183 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:06:46.098191 | orchestrator | 2026-03-07 01:06:46.098198 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-07 01:06:46.098205 | orchestrator | Saturday 07 March 
2026 01:04:30 +0000 (0:00:00.932) 0:01:02.020 ******** 2026-03-07 01:06:46.098212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 01:06:46.098219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098255 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:06:46.098262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 01:06:46.098270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098295 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:06:46.098305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 01:06:46.098320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 
01:06:46.098328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098343 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:06:46.098350 | orchestrator | 2026-03-07 01:06:46.098357 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-07 01:06:46.098365 | orchestrator | Saturday 07 March 2026 01:04:32 +0000 (0:00:01.922) 0:01:03.943 ******** 2026-03-07 01:06:46.098372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:06:46.098386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:06:46.098398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:06:46.098405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.098413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.098421 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.098435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.098447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.098454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.098462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.098470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.098477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.098485 | orchestrator | 2026-03-07 01:06:46.098492 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-07 01:06:46.098499 | orchestrator | Saturday 07 March 2026 01:04:37 +0000 (0:00:05.108) 0:01:09.052 ******** 2026-03-07 01:06:46.098506 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-07 01:06:46.098521 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-07 01:06:46.098528 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-07 01:06:46.098535 | orchestrator | 2026-03-07 01:06:46.098546 | orchestrator | TASK [cinder : Copying over cinder.conf] 
*************************************** 2026-03-07 01:06:46.098553 | orchestrator | Saturday 07 March 2026 01:04:39 +0000 (0:00:01.995) 0:01:11.047 ******** 2026-03-07 01:06:46.098560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:06:46.098568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}}) 2026-03-07 01:06:46.098577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:06:46.098586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.098599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.098622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.098631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.098641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.098649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.098656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.098673 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.098683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.098692 | orchestrator | 2026-03-07 01:06:46.098700 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-07 01:06:46.098709 | orchestrator | Saturday 07 March 2026 01:04:57 +0000 (0:00:18.324) 0:01:29.372 ******** 2026-03-07 01:06:46.098717 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:06:46.098726 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:06:46.098734 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:06:46.098742 | orchestrator | 2026-03-07 01:06:46.098750 | orchestrator | 
TASK [cinder : Copying over existing policy file] ****************************** 2026-03-07 01:06:46.098759 | orchestrator | Saturday 07 March 2026 01:05:00 +0000 (0:00:02.657) 0:01:32.029 ******** 2026-03-07 01:06:46.098767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 01:06:46.098776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098810 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:06:46.098819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 01:06:46.098828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098859 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:06:46.098895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-07 01:06:46.098908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-07 01:06:46.098935 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:06:46.098942 | orchestrator | 2026-03-07 01:06:46.098949 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-07 01:06:46.098956 | orchestrator | Saturday 07 March 
2026 01:05:01 +0000 (0:00:01.295) 0:01:33.324 ******** 2026-03-07 01:06:46.098963 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:06:46.098970 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:06:46.098988 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:06:46.099001 | orchestrator | 2026-03-07 01:06:46.099008 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-07 01:06:46.099015 | orchestrator | Saturday 07 March 2026 01:05:02 +0000 (0:00:00.580) 0:01:33.905 ******** 2026-03-07 01:06:46.099026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:06:46.099041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:06:46.099049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-07 01:06:46.099057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}) 2026-03-07 01:06:46.099065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.099076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.099097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.099112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.099120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.099127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.099135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.099146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-07 01:06:46.099153 | orchestrator | 2026-03-07 01:06:46.099160 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2026-03-07 01:06:46.099167 | orchestrator | Saturday 07 March 2026 01:05:06 +0000 (0:00:04.244) 0:01:38.150 ******** 2026-03-07 01:06:46.099174 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:06:46.099181 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:06:46.099188 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:06:46.099195 | orchestrator | 2026-03-07 01:06:46.099202 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-07 01:06:46.099209 | orchestrator | Saturday 07 March 2026 01:05:07 +0000 (0:00:01.247) 0:01:39.397 ******** 2026-03-07 01:06:46.099216 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:06:46.099223 | orchestrator | 2026-03-07 01:06:46.099230 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-07 01:06:46.099237 | orchestrator | Saturday 07 March 2026 01:05:10 +0000 (0:00:02.411) 0:01:41.808 ******** 2026-03-07 01:06:46.099244 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:06:46.099252 | orchestrator | 2026-03-07 01:06:46.099259 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-07 01:06:46.099270 | orchestrator | Saturday 07 March 2026 01:05:12 +0000 (0:00:02.187) 0:01:43.996 ******** 2026-03-07 01:06:46.099277 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:06:46.099284 | orchestrator | 2026-03-07 01:06:46.099291 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-07 01:06:46.099300 | orchestrator | Saturday 07 March 2026 01:05:32 +0000 (0:00:20.070) 0:02:04.067 ******** 2026-03-07 01:06:46.099308 | orchestrator | 2026-03-07 01:06:46.099315 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-07 01:06:46.099322 | orchestrator | Saturday 07 March 2026 01:05:32 +0000 (0:00:00.076) 
0:02:04.144 ******** 2026-03-07 01:06:46.099329 | orchestrator | 2026-03-07 01:06:46.099336 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-07 01:06:46.099343 | orchestrator | Saturday 07 March 2026 01:05:32 +0000 (0:00:00.067) 0:02:04.212 ******** 2026-03-07 01:06:46.099350 | orchestrator | 2026-03-07 01:06:46.099357 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-07 01:06:46.099364 | orchestrator | Saturday 07 March 2026 01:05:32 +0000 (0:00:00.068) 0:02:04.280 ******** 2026-03-07 01:06:46.099371 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:06:46.099378 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:06:46.099385 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:06:46.099392 | orchestrator | 2026-03-07 01:06:46.099399 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-07 01:06:46.099406 | orchestrator | Saturday 07 March 2026 01:05:55 +0000 (0:00:22.654) 0:02:26.935 ******** 2026-03-07 01:06:46.099413 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:06:46.099420 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:06:46.099427 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:06:46.099439 | orchestrator | 2026-03-07 01:06:46.099446 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-07 01:06:46.099453 | orchestrator | Saturday 07 March 2026 01:06:01 +0000 (0:00:05.734) 0:02:32.669 ******** 2026-03-07 01:06:46.099460 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:06:46.099467 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:06:46.099475 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:06:46.099481 | orchestrator | 2026-03-07 01:06:46.099489 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-07 01:06:46.099496 | 
orchestrator | Saturday 07 March 2026 01:06:26 +0000 (0:00:25.275) 0:02:57.944 ******** 2026-03-07 01:06:46.099503 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:06:46.099510 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:06:46.099517 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:06:46.099525 | orchestrator | 2026-03-07 01:06:46.099532 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-07 01:06:46.099539 | orchestrator | Saturday 07 March 2026 01:06:43 +0000 (0:00:17.245) 0:03:15.189 ******** 2026-03-07 01:06:46.099545 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:06:46.099552 | orchestrator | 2026-03-07 01:06:46.099559 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:06:46.099566 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-07 01:06:46.099574 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-07 01:06:46.099581 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-07 01:06:46.099588 | orchestrator | 2026-03-07 01:06:46.099595 | orchestrator | 2026-03-07 01:06:46.099603 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:06:46.099610 | orchestrator | Saturday 07 March 2026 01:06:43 +0000 (0:00:00.240) 0:03:15.430 ******** 2026-03-07 01:06:46.099617 | orchestrator | =============================================================================== 2026-03-07 01:06:46.099624 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 25.28s 2026-03-07 01:06:46.099631 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.65s 2026-03-07 01:06:46.099638 | orchestrator | cinder : Running Cinder 
bootstrap container ---------------------------- 20.07s 2026-03-07 01:06:46.099645 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 18.32s 2026-03-07 01:06:46.099652 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 17.25s 2026-03-07 01:06:46.099658 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.71s 2026-03-07 01:06:46.099664 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.77s 2026-03-07 01:06:46.099670 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.73s 2026-03-07 01:06:46.099676 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.12s 2026-03-07 01:06:46.099682 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.11s 2026-03-07 01:06:46.099689 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 4.54s 2026-03-07 01:06:46.099696 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.40s 2026-03-07 01:06:46.099703 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.24s 2026-03-07 01:06:46.099710 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.94s 2026-03-07 01:06:46.099717 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.88s 2026-03-07 01:06:46.099724 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.63s 2026-03-07 01:06:46.099731 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.54s 2026-03-07 01:06:46.099746 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 3.02s 2026-03-07 01:06:46.099754 | orchestrator | cinder : Ensuring config directories 
exist ------------------------------ 2.97s 2026-03-07 01:06:46.099764 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.66s 2026-03-07 01:06:46.099771 | orchestrator | 2026-03-07 01:06:46 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:06:46.099779 | orchestrator | 2026-03-07 01:06:46 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:06:46.099786 | orchestrator | 2026-03-07 01:06:46 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:06:49.156308 | orchestrator | 2026-03-07 01:06:49 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:06:49.157013 | orchestrator | 2026-03-07 01:06:49 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:06:49.159803 | orchestrator | 2026-03-07 01:06:49 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:06:49.160988 | orchestrator | 2026-03-07 01:06:49 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:06:49.161348 | orchestrator | 2026-03-07 01:06:49 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:06:52.191197 | orchestrator | 2026-03-07 01:06:52 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:06:52.191709 | orchestrator | 2026-03-07 01:06:52 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:06:52.192979 | orchestrator | 2026-03-07 01:06:52 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state STARTED 2026-03-07 01:06:52.193998 | orchestrator | 2026-03-07 01:06:52 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:06:52.194067 | orchestrator | 2026-03-07 01:06:52 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:06:55.248358 | orchestrator | 2026-03-07 01:06:55 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 
2026-03-07 01:06:55.250637 | orchestrator | 2026-03-07 01:06:55 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:06:55.252026 | orchestrator | 2026-03-07 01:06:55 | INFO  | Task 6c1e3c12-a91a-4c8d-bebb-e372f44704e8 is in state SUCCESS 2026-03-07 01:06:55.253867 | orchestrator | 2026-03-07 01:06:55.253895 | orchestrator | 2026-03-07 01:06:55.253900 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:06:55.253905 | orchestrator | 2026-03-07 01:06:55.253910 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:06:55.253918 | orchestrator | Saturday 07 March 2026 01:03:22 +0000 (0:00:00.444) 0:00:00.444 ******** 2026-03-07 01:06:55.253925 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:06:55.253933 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:06:55.253940 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:06:55.253946 | orchestrator | 2026-03-07 01:06:55.253950 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:06:55.253954 | orchestrator | Saturday 07 March 2026 01:03:23 +0000 (0:00:00.841) 0:00:01.286 ******** 2026-03-07 01:06:55.253958 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-07 01:06:55.253963 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-07 01:06:55.253967 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-07 01:06:55.253971 | orchestrator | 2026-03-07 01:06:55.253975 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-07 01:06:55.253979 | orchestrator | 2026-03-07 01:06:55.253983 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-07 01:06:55.254000 | orchestrator | Saturday 07 March 2026 01:03:25 +0000 (0:00:01.719) 0:00:03.006 ******** 2026-03-07 
01:06:55.254005 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:06:55.254009 | orchestrator | 2026-03-07 01:06:55.254035 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-07 01:06:55.254042 | orchestrator | Saturday 07 March 2026 01:03:26 +0000 (0:00:01.112) 0:00:04.119 ******** 2026-03-07 01:06:55.254049 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-07 01:06:55.254056 | orchestrator | 2026-03-07 01:06:55.254063 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-07 01:06:55.254070 | orchestrator | Saturday 07 March 2026 01:03:30 +0000 (0:00:04.420) 0:00:08.539 ******** 2026-03-07 01:06:55.254077 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-07 01:06:55.254082 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-07 01:06:55.254117 | orchestrator | 2026-03-07 01:06:55.254126 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-07 01:06:55.254133 | orchestrator | Saturday 07 March 2026 01:03:38 +0000 (0:00:07.216) 0:00:15.755 ******** 2026-03-07 01:06:55.254139 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-07 01:06:55.254146 | orchestrator | 2026-03-07 01:06:55.254152 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-07 01:06:55.254159 | orchestrator | Saturday 07 March 2026 01:03:41 +0000 (0:00:03.652) 0:00:19.408 ******** 2026-03-07 01:06:55.254165 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-07 01:06:55.254172 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-07 01:06:55.254179 | orchestrator | 2026-03-07 
01:06:55.254186 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-07 01:06:55.254202 | orchestrator | Saturday 07 March 2026 01:03:46 +0000 (0:00:04.433) 0:00:23.842 ******** 2026-03-07 01:06:55.254210 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-07 01:06:55.254217 | orchestrator | 2026-03-07 01:06:55.254223 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-07 01:06:55.254230 | orchestrator | Saturday 07 March 2026 01:03:51 +0000 (0:00:05.199) 0:00:29.042 ******** 2026-03-07 01:06:55.254236 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-07 01:06:55.254243 | orchestrator | 2026-03-07 01:06:55.254249 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-07 01:06:55.254255 | orchestrator | Saturday 07 March 2026 01:03:55 +0000 (0:00:04.367) 0:00:33.409 ******** 2026-03-07 01:06:55.254277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:06:55.254293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:06:55.254304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:06:55.254312 | orchestrator | 2026-03-07 01:06:55.254318 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-07 01:06:55.254325 | orchestrator | Saturday 07 March 2026 01:04:01 +0000 (0:00:05.372) 0:00:38.782 ******** 2026-03-07 01:06:55.254346 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:06:55.254353 | orchestrator | 2026-03-07 01:06:55.254371 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-07 01:06:55.254397 | orchestrator | Saturday 07 March 2026 01:04:01 +0000 (0:00:00.677) 0:00:39.459 ******** 2026-03-07 01:06:55.254408 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:06:55.254415 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:06:55.254422 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:06:55.254429 | orchestrator | 2026-03-07 01:06:55.254433 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-07 01:06:55.254437 | orchestrator | Saturday 07 March 2026 01:04:07 +0000 (0:00:05.282) 0:00:44.742 ******** 2026-03-07 01:06:55.254441 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-07 01:06:55.254445 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-07 01:06:55.254449 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-07 01:06:55.254453 | orchestrator | 2026-03-07 01:06:55.254457 | orchestrator | TASK [glance : Copy over 
ceph Glance keyrings] ********************************* 2026-03-07 01:06:55.254462 | orchestrator | Saturday 07 March 2026 01:04:09 +0000 (0:00:02.014) 0:00:46.756 ******** 2026-03-07 01:06:55.254468 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-07 01:06:55.254474 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-07 01:06:55.254481 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-07 01:06:55.254488 | orchestrator | 2026-03-07 01:06:55.254494 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-07 01:06:55.254501 | orchestrator | Saturday 07 March 2026 01:04:10 +0000 (0:00:01.779) 0:00:48.536 ******** 2026-03-07 01:06:55.254508 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:06:55.254514 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:06:55.254521 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:06:55.254528 | orchestrator | 2026-03-07 01:06:55.254535 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-07 01:06:55.254542 | orchestrator | Saturday 07 March 2026 01:04:12 +0000 (0:00:01.063) 0:00:49.599 ******** 2026-03-07 01:06:55.254550 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:06:55.254556 | orchestrator | 2026-03-07 01:06:55.254561 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-07 01:06:55.254566 | orchestrator | Saturday 07 March 2026 01:04:12 +0000 (0:00:00.141) 0:00:49.741 ******** 2026-03-07 01:06:55.254570 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:06:55.254575 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:06:55.254580 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:06:55.254585 | orchestrator | 
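The glance-api container definitions dumped in the task output above repeat the same HAProxy `custom_member_list` pattern for every node: one `server` line per backend, built from the node name, its internal IP, the service port 9292, and a fixed health-check suffix. A minimal sketch of how such member lines could be rendered from the inventory (the `haproxy_members` helper is hypothetical, not OSISM's or kolla-ansible's actual template code):

```python
# Sketch only: reproduces the 'server ... check inter 2000 rise 2 fall 5'
# lines seen in the glance_api haproxy config above. Function name and
# signature are illustrative assumptions, not part of kolla-ansible.

def haproxy_members(nodes, port=9292, check="check inter 2000 rise 2 fall 5"):
    """Render one HAProxy 'server' line per backend node."""
    return [f"server {name} {ip}:{port} {check}" for name, ip in nodes]

# Node names and internal IPs as they appear in the deploy log.
nodes = [
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
]

for line in haproxy_members(nodes):
    print(line)
# server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5
# server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5
# server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5
```

The same three-member list is attached to both the internal (`glance_api`) and external (`glance_api_external`) frontends in the log, differing only in the `external_fqdn`.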
2026-03-07 01:06:55.254589 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-07 01:06:55.254594 | orchestrator | Saturday 07 March 2026 01:04:12 +0000 (0:00:00.345) 0:00:50.086 ******** 2026-03-07 01:06:55.254602 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:06:55.254606 | orchestrator | 2026-03-07 01:06:55.254611 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-07 01:06:55.254616 | orchestrator | Saturday 07 March 2026 01:04:13 +0000 (0:00:00.770) 0:00:50.857 ******** 2026-03-07 01:06:55.254626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:06:55.254636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:06:55.254645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:06:55.254652 | orchestrator | 2026-03-07 01:06:55.254657 | orchestrator | TASK [service-cert-copy : 
glance | Copying over backend internal TLS certificate] *** 2026-03-07 01:06:55.254662 | orchestrator | Saturday 07 March 2026 01:04:19 +0000 (0:00:06.593) 0:00:57.451 ******** 2026-03-07 01:06:55.254671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-07 01:06:55.254676 | orchestrator | skipping: [testbed-node-0] 2026-03-07 
01:06:55.254684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-07 01:06:55.254692 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:06:55.254703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-07 01:06:55.254708 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:06:55.254713 | orchestrator | 2026-03-07 01:06:55.254719 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-07 01:06:55.254726 | orchestrator | Saturday 07 March 2026 01:04:25 +0000 (0:00:06.112) 0:01:03.564 ******** 2026-03-07 01:06:55.254737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-07 01:06:55.254749 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:06:55.254760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-07 01:06:55.254767 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:06:55.254774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-07 01:06:55.254781 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:06:55.254787 | orchestrator | 2026-03-07 01:06:55.254793 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-07 01:06:55.254799 | orchestrator | Saturday 07 March 2026 01:04:30 +0000 (0:00:04.663) 0:01:08.227 ******** 2026-03-07 01:06:55.254809 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:06:55.254816 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:06:55.254822 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:06:55.254828 | orchestrator | 2026-03-07 01:06:55.254835 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-07 
01:06:55.254849 | orchestrator | Saturday 07 March 2026 01:04:34 +0000 (0:00:04.349) 0:01:12.576 ******** 2026-03-07 01:06:55.254857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:06:55.254869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:06:55.254878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:06:55.254886 | orchestrator | 2026-03-07 01:06:55.254892 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-07 01:06:55.254898 | orchestrator | Saturday 07 March 2026 01:04:39 +0000 (0:00:04.586) 0:01:17.163 ******** 2026-03-07 01:06:55.254905 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:06:55.254912 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:06:55.254918 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:06:55.254924 | orchestrator | 2026-03-07 01:06:55.254928 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-07 
01:06:55.254932 | orchestrator | Saturday 07 March 2026 01:04:49 +0000 (0:00:09.518) 0:01:26.682 ******** 2026-03-07 01:06:55.254938 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:06:55.254944 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:06:55.254950 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:06:55.254957 | orchestrator | 2026-03-07 01:06:55.254964 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-07 01:06:55.254970 | orchestrator | Saturday 07 March 2026 01:04:55 +0000 (0:00:06.081) 0:01:32.763 ******** 2026-03-07 01:06:55.254977 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:06:55.255076 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:06:55.255100 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:06:55.255108 | orchestrator | 2026-03-07 01:06:55.255115 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-07 01:06:55.255121 | orchestrator | Saturday 07 March 2026 01:04:59 +0000 (0:00:04.659) 0:01:37.422 ******** 2026-03-07 01:06:55.255128 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:06:55.255134 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:06:55.255140 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:06:55.255147 | orchestrator | 2026-03-07 01:06:55.255154 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-07 01:06:55.255161 | orchestrator | Saturday 07 March 2026 01:05:04 +0000 (0:00:04.557) 0:01:41.980 ******** 2026-03-07 01:06:55.255168 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:06:55.255172 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:06:55.255176 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:06:55.255180 | orchestrator | 2026-03-07 01:06:55.255184 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-07 
01:06:55.255188 | orchestrator | Saturday 07 March 2026 01:05:09 +0000 (0:00:05.022) 0:01:47.002 ******** 2026-03-07 01:06:55.255196 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:06:55.255200 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:06:55.255204 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:06:55.255208 | orchestrator | 2026-03-07 01:06:55.255212 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-07 01:06:55.255216 | orchestrator | Saturday 07 March 2026 01:05:09 +0000 (0:00:00.324) 0:01:47.327 ******** 2026-03-07 01:06:55.255220 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-07 01:06:55.255224 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:06:55.255229 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-07 01:06:55.255233 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:06:55.255236 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-07 01:06:55.255240 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:06:55.255244 | orchestrator | 2026-03-07 01:06:55.255248 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-07 01:06:55.255252 | orchestrator | Saturday 07 March 2026 01:05:13 +0000 (0:00:03.886) 0:01:51.213 ******** 2026-03-07 01:06:55.255256 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:06:55.255260 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:06:55.255264 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:06:55.255268 | orchestrator | 2026-03-07 01:06:55.255272 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-07 01:06:55.255275 | orchestrator | Saturday 07 March 2026 01:05:18 +0000 (0:00:05.211) 
0:01:56.425 ******** 2026-03-07 01:06:55.255283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:06:55.255292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:06:55.255302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-07 01:06:55.255306 | orchestrator | 2026-03-07 01:06:55.255310 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-07 01:06:55.255314 | orchestrator | Saturday 07 March 2026 01:05:23 +0000 (0:00:04.411) 0:02:00.837 ******** 2026-03-07 01:06:55.255318 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:06:55.255322 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:06:55.255326 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:06:55.255330 | orchestrator | 2026-03-07 01:06:55.255334 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-07 01:06:55.255339 
| orchestrator | Saturday 07 March 2026 01:05:23 +0000 (0:00:00.346) 0:02:01.184 ******** 2026-03-07 01:06:55.255346 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:06:55.255353 | orchestrator | 2026-03-07 01:06:55.255360 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-03-07 01:06:55.255366 | orchestrator | Saturday 07 March 2026 01:05:25 +0000 (0:00:02.092) 0:02:03.276 ******** 2026-03-07 01:06:55.255373 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:06:55.255377 | orchestrator | 2026-03-07 01:06:55.255381 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-07 01:06:55.255388 | orchestrator | Saturday 07 March 2026 01:05:27 +0000 (0:00:02.310) 0:02:05.586 ******** 2026-03-07 01:06:55.255392 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:06:55.255396 | orchestrator | 2026-03-07 01:06:55.255400 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-07 01:06:55.255404 | orchestrator | Saturday 07 March 2026 01:05:30 +0000 (0:00:02.391) 0:02:07.978 ******** 2026-03-07 01:06:55.255408 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:06:55.255412 | orchestrator | 2026-03-07 01:06:55.255416 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-07 01:06:55.255422 | orchestrator | Saturday 07 March 2026 01:06:01 +0000 (0:00:31.183) 0:02:39.162 ******** 2026-03-07 01:06:55.255426 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:06:55.255430 | orchestrator | 2026-03-07 01:06:55.255434 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-07 01:06:55.255438 | orchestrator | Saturday 07 March 2026 01:06:04 +0000 (0:00:03.130) 0:02:42.293 ******** 2026-03-07 01:06:55.255442 | orchestrator | 2026-03-07 01:06:55.255446 | orchestrator | TASK [glance : Flush handlers] 
************************************************* 2026-03-07 01:06:55.255450 | orchestrator | Saturday 07 March 2026 01:06:04 +0000 (0:00:00.086) 0:02:42.379 ******** 2026-03-07 01:06:55.255454 | orchestrator | 2026-03-07 01:06:55.255458 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-07 01:06:55.255462 | orchestrator | Saturday 07 March 2026 01:06:04 +0000 (0:00:00.072) 0:02:42.452 ******** 2026-03-07 01:06:55.255466 | orchestrator | 2026-03-07 01:06:55.255473 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-07 01:06:55.255479 | orchestrator | Saturday 07 March 2026 01:06:04 +0000 (0:00:00.069) 0:02:42.522 ******** 2026-03-07 01:06:55.255485 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:06:55.255492 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:06:55.255499 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:06:55.255506 | orchestrator | 2026-03-07 01:06:55.255520 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:06:55.255525 | orchestrator | testbed-node-0 : ok=27  changed=20  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-07 01:06:55.255530 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-07 01:06:55.255534 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-07 01:06:55.255538 | orchestrator | 2026-03-07 01:06:55.255542 | orchestrator | 2026-03-07 01:06:55.255546 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:06:55.255550 | orchestrator | Saturday 07 March 2026 01:06:51 +0000 (0:00:46.869) 0:03:29.391 ******** 2026-03-07 01:06:55.255554 | orchestrator | =============================================================================== 2026-03-07 
01:06:55.255558 | orchestrator | glance : Restart glance-api container ---------------------------------- 46.87s 2026-03-07 01:06:55.255562 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 31.18s 2026-03-07 01:06:55.255566 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 9.52s 2026-03-07 01:06:55.255570 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.22s 2026-03-07 01:06:55.255574 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.59s 2026-03-07 01:06:55.255578 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 6.11s 2026-03-07 01:06:55.255582 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.08s 2026-03-07 01:06:55.255588 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.37s 2026-03-07 01:06:55.255592 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 5.28s 2026-03-07 01:06:55.255599 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 5.21s 2026-03-07 01:06:55.255603 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 5.20s 2026-03-07 01:06:55.255607 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.02s 2026-03-07 01:06:55.255611 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.66s 2026-03-07 01:06:55.255615 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.66s 2026-03-07 01:06:55.255619 | orchestrator | glance : Copying over config.json files for services -------------------- 4.59s 2026-03-07 01:06:55.255623 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.56s 2026-03-07 
01:06:55.255627 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.43s 2026-03-07 01:06:55.255631 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.42s 2026-03-07 01:06:55.255635 | orchestrator | glance : Check glance containers ---------------------------------------- 4.41s 2026-03-07 01:06:55.255639 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.37s 2026-03-07 01:06:55.255643 | orchestrator | 2026-03-07 01:06:55 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:06:55.256297 | orchestrator | 2026-03-07 01:06:55 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:06:55.256325 | orchestrator | 2026-03-07 01:06:55 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:06:58.292769 | orchestrator | 2026-03-07 01:06:58 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:06:58.297442 | orchestrator | 2026-03-07 01:06:58 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:06:58.298936 | orchestrator | 2026-03-07 01:06:58 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:06:58.300428 | orchestrator | 2026-03-07 01:06:58 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:06:58.300477 | orchestrator | 2026-03-07 01:06:58 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:01.354232 | orchestrator | 2026-03-07 01:07:01 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:07:01.357470 | orchestrator | 2026-03-07 01:07:01 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state STARTED 2026-03-07 01:07:01.359999 | orchestrator | 2026-03-07 01:07:01 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:07:01.362121 | orchestrator | 2026-03-07 01:07:01 | 
INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:07:01.362180 | orchestrator | 2026-03-07 01:07:01 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:04.425279 | orchestrator | 2026-03-07 01:07:04 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:07:04.430305 | orchestrator | 2026-03-07 01:07:04 | INFO  | Task bde0b076-c270-494e-b8e5-d6890cdadfce is in state SUCCESS 2026-03-07 01:07:04.432447 | orchestrator | 2026-03-07 01:07:04.432512 | orchestrator | 2026-03-07 01:07:04.432521 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:07:04.432529 | orchestrator | 2026-03-07 01:07:04.432865 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:07:04.432877 | orchestrator | Saturday 07 March 2026 01:03:11 +0000 (0:00:00.369) 0:00:00.369 ******** 2026-03-07 01:07:04.432886 | orchestrator | ok: [testbed-manager] 2026-03-07 01:07:04.432896 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:07:04.432905 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:07:04.432914 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:07:04.432922 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:07:04.432955 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:07:04.432961 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:07:04.432966 | orchestrator | 2026-03-07 01:07:04.432972 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:07:04.432977 | orchestrator | Saturday 07 March 2026 01:03:12 +0000 (0:00:00.928) 0:00:01.298 ******** 2026-03-07 01:07:04.432983 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-07 01:07:04.432989 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-07 01:07:04.432994 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 
2026-03-07 01:07:04.432999 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-07 01:07:04.433004 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-07 01:07:04.433009 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-07 01:07:04.433014 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-07 01:07:04.433019 | orchestrator | 2026-03-07 01:07:04.433025 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-03-07 01:07:04.433030 | orchestrator | 2026-03-07 01:07:04.433035 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-07 01:07:04.433040 | orchestrator | Saturday 07 March 2026 01:03:13 +0000 (0:00:00.904) 0:00:02.202 ******** 2026-03-07 01:07:04.433057 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 01:07:04.433064 | orchestrator | 2026-03-07 01:07:04.433069 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-07 01:07:04.433075 | orchestrator | Saturday 07 March 2026 01:03:15 +0000 (0:00:02.209) 0:00:04.411 ******** 2026-03-07 01:07:04.433082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.433112 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.433204 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-07 01:07:04.433244 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.433271 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.433277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.433345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.433352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.433357 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.433364 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.433370 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.433386 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.433392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.433398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.433408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-07 01:07:04.433413 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.433419 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.433424 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.433429 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.433444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.433450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.433458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.433464 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.433473 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-07 01:07:04.433481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.433492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.433502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.433509 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.433519 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.433525 | orchestrator | 2026-03-07 01:07:04.433532 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-07 01:07:04.433538 | orchestrator | Saturday 07 March 2026 01:03:20 +0000 (0:00:04.932) 0:00:09.344 ******** 2026-03-07 01:07:04.433545 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 01:07:04.433552 | orchestrator | 2026-03-07 01:07:04.433558 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-07 01:07:04.433564 | orchestrator | Saturday 07 March 2026 01:03:23 +0000 (0:00:02.550) 0:00:11.894 ******** 2026-03-07 01:07:04.433581 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.433588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.433605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.433613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.433623 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-07 01:07:04.433630 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.433640 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.433646 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.433652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.433663 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.433672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.433979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.434002 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.434051 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.434060 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.434065 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.434084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.434109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.434115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.434146 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.434152 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.434162 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-07 01:07:04.434169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.434179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.434184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.434193 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.434198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.434205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.434214 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.434222 | orchestrator | 2026-03-07 01:07:04.434231 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-07 01:07:04.434245 | orchestrator | Saturday 07 March 2026 01:03:30 +0000 (0:00:07.781) 0:00:19.675 ******** 2026-03-07 01:07:04.434256 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-07 01:07:04.434272 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 01:07:04.434280 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 01:07:04.434296 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-07 01:07:04.434305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 
'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 01:07:04.434317 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:04.434326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:04.434340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:04.434349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 01:07:04.434358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:04.434366 | orchestrator | skipping: [testbed-manager] 2026-03-07 01:07:04.434381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 01:07:04.434390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:04.434399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:04.434413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-07 01:07:04.434427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:04.434433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-07 01:07:04.434438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:04.434596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-07 01:07:04.434617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.434623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 01:07:04.434695 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:07:04.434702 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:07:04.434708 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:07:04.434717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 01:07:04.434730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.434736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.434741 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:07:04.434746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 01:07:04.434752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.434773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.434779 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:07:04.434784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 01:07:04.434793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.434803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.434808 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:07:04.434813 | orchestrator |
2026-03-07 01:07:04.434819 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-03-07 01:07:04.434824 | orchestrator | Saturday 07 March 2026 01:03:32 +0000 (0:00:01.509) 0:00:21.185 ********
2026-03-07 01:07:04.434830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 01:07:04.434835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 01:07:04.434841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 01:07:04.434858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.434865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 01:07:04.434875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 01:07:04.434883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 01:07:04.434889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 01:07:04.434894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.434900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 01:07:04.434917 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-07 01:07:04.434923 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 01:07:04.434935 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.434945 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-07 01:07:04.434951 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 01:07:04.434956 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:07:04.434962 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:07:04.434967 | orchestrator | skipping: [testbed-manager]
2026-03-07 01:07:04.434972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 01:07:04.434978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.434996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.435006 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:07:04.435011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 01:07:04.435017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 01:07:04.435068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 01:07:04.435074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.435311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 01:07:04.435326 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:07:04.435331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 01:07:04.435337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.435364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.435370 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:07:04.435376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 01:07:04.435386 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.435391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.435397 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:07:04.435402 | orchestrator |
2026-03-07 01:07:04.435407 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-03-07 01:07:04.435413 | orchestrator | Saturday 07 March 2026 01:03:34 +0000 (0:00:02.205) 0:00:23.390 ********
2026-03-07 01:07:04.435419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 01:07:04.435424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 01:07:04.435441 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-07 01:07:04.435452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 01:07:04.435457 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 01:07:04.435465 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 01:07:04.435471 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 01:07:04.435476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 01:07:04.435482 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-07 01:07:04.435487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 01:07:04.435513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 01:07:04.435523 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.435533 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.435546 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.435555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 01:07:04.435563 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.435571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 01:07:04.435586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 01:07:04.435615 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.435624 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.435637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.435646 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.435655 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-07 01:07:04.435665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-07 01:07:04.435847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro',
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.435866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.435875 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.435890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.435899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 01:07:04.435908 | orchestrator |
2026-03-07 01:07:04.435917 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-03-07 01:07:04.435927 | orchestrator | Saturday 07 March 2026 01:03:40 +0000 (0:00:06.272) 0:00:29.663 ********
2026-03-07 01:07:04.435936 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-07 01:07:04.435942 | orchestrator |
2026-03-07 01:07:04.435948 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-03-07 01:07:04.435953 | orchestrator | Saturday 07 March 2026 01:03:42 +0000 (0:00:01.595) 0:00:31.259 ********
2026-03-07 01:07:04.435959 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094423, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.447144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:04.435971 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094423, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.447144,
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436051 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094423, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.447144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436064 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094423, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.447144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436073 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094423, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.447144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436108 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094423, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.447144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436118 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094451, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4511063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436129 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094451, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4511063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436141 | orchestrator | skipping: [testbed-node-1] => 
(item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094451, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4511063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436167 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094451, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4511063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436174 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094451, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4511063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436179 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094415, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.446278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436188 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094415, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.446278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436193 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094415, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.446278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436203 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094442, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1772842719.4497495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436209 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094415, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.446278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436233 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094415, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.446278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436242 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094423, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.447144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 01:07:04.436249 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094451, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4511063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436262 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094442, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4497495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436270 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094415, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.446278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436282 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094442, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4497495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436291 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094410, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.445005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436320 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094442, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4497495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436329 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094410, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.445005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436337 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094442, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4497495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436351 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094426, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4475076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436357 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094442, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1772842719.4497495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436368 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094410, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.445005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436373 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094410, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.445005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436395 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094440, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.449302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436401 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094426, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4475076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436406 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094426, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4475076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436415 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094426, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4475076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436424 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094410, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.445005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436429 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094410, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.445005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436435 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094451, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4511063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 01:07:04.436441 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094426, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4475076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436461 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094440, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.449302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436468 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094440, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.449302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.436476 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094431, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4481642, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:04.436486 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2026-03-07 01:07:04.436492 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules)
2026-03-07 01:07:04.436497 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2026-03-07 01:07:04.436502 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2026-03-07 01:07:04.436523 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)
2026-03-07 01:07:04.436529 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2026-03-07 01:07:04.436538 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2026-03-07 01:07:04.436548 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-07 01:07:04.436553 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-07 01:07:04.436559 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-07 01:07:04.436564 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-07 01:07:04.436585 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-07 01:07:04.436591 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2026-03-07 01:07:04.436600 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2026-03-07 01:07:04.436611 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-07 01:07:04.436616 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2026-03-07 01:07:04.436621 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-07 01:07:04.436627 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-07 01:07:04.436648 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-07 01:07:04.436655 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-07 01:07:04.436667 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-07 01:07:04.436674 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2026-03-07 01:07:04.436680 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-07 01:07:04.436686 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-07 01:07:04.436692 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2026-03-07 01:07:04.436715 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-07 01:07:04.436723 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2026-03-07 01:07:04.436736 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-07 01:07:04.436742 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-07 01:07:04.436749 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-07 01:07:04.436757 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2026-03-07 01:07:04.436766 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2026-03-07 01:07:04.436805 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-07 01:07:04.436816 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-07 01:07:04.436835 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-07 01:07:04.436844 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2026-03-07 01:07:04.436852 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules)
2026-03-07 01:07:04.436861 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules)
2026-03-07 01:07:04.436870 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules)
2026-03-07 01:07:04.436884 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2026-03-07 01:07:04.436900 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules)
2026-03-07 01:07:04.436914 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-07 01:07:04.436923 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-07 01:07:04.436932 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2026-03-07 01:07:04.436939 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rec.rules)
2026-03-07 01:07:04.436946 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2026-03-07 01:07:04.436959 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-03-07 01:07:04.437038 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules)
2026-03-07 01:07:04.437049 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:07:04.437063 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules)
2026-03-07 01:07:04.437070 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-07 01:07:04.437078 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/mysql.rules)
2026-03-07 01:07:04.437084 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules)
2026-03-07 01:07:04.437114 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-07 01:07:04.437128 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rules)
2026-03-07 01:07:04.437140 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2026-03-07 01:07:04.437151 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2026-03-07 01:07:04.437158 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules)
2026-03-07 01:07:04.437164 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/rabbitmq.rules)
2026-03-07 01:07:04.437171 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:07:04.437178 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rules)
2026-03-07 01:07:04.437184 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rec.rules)
2026-03-07 01:07:04.437197 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/rabbitmq.rules)
2026-03-07 01:07:04.437205 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2026-03-07 01:07:04.437211 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:07:04.437221 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2026-03-07 01:07:04.437228 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rec.rules)
2026-03-07 01:07:04.437234 | orchestrator | skipping: [testbed-node-5] => (item={'path':
'/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094434, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.448492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.437241 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094463, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.452621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.437247 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:07:04.437253 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094434, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.448492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.437273 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094412, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.445268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.437280 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094408, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4447339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.437290 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094440, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.449302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 01:07:04.437296 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094463, 'dev': 116, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.452621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.437302 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:07:04.437308 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094436, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.448492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.437314 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094434, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.448492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.437321 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094463, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.452621, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-07 01:07:04.437331 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:07:04.437341 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094431, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4481642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 01:07:04.437347 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094421, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4464922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 01:07:04.437357 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094449, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4509294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 01:07:04.437363 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094405, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4434922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 01:07:04.437370 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094464, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4534922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 01:07:04.437376 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094447, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4506602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 01:07:04.437386 | orchestrator | changed: 
[testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094412, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.445268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 01:07:04.437395 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094408, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4447339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 01:07:04.437402 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094436, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.448492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-07 01:07:04.437412 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094434, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.448492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:04.437418 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094463, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.452621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-07 01:07:04.437488 | orchestrator |
2026-03-07 01:07:04.437496 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-03-07 01:07:04.437504 | orchestrator | Saturday 07 March 2026 01:04:16 +0000 (0:00:34.427) 0:01:05.686 ********
2026-03-07 01:07:04.437513 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-07 01:07:04.437521 | orchestrator |
2026-03-07 01:07:04.437534 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-03-07 01:07:04.437545 | orchestrator | Saturday 07 March 2026 01:04:17 +0000 (0:00:01.123) 0:01:06.810 ********
2026-03-07 01:07:04.437554 | orchestrator | [WARNING]: Skipped
2026-03-07 01:07:04.437563 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:04.437573 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-03-07 01:07:04.437588 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:04.437598 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-03-07 01:07:04.437608 | orchestrator | [WARNING]: Skipped
2026-03-07 01:07:04.437617 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:04.437626 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-03-07 01:07:04.437634 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:04.437642 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-03-07 01:07:04.437648 | orchestrator | [WARNING]: Skipped
2026-03-07 01:07:04.437654 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:04.437660 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-03-07 01:07:04.437667 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:04.437673 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-03-07 01:07:04.437679 | orchestrator | [WARNING]: Skipped
2026-03-07 01:07:04.437686 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:04.437692 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-03-07 01:07:04.437699 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:04.437705 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-03-07 01:07:04.437711 | orchestrator | [WARNING]: Skipped
2026-03-07 01:07:04.437717 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:04.437724 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-03-07 01:07:04.437730 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:04.437741 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-03-07 01:07:04.437748 | orchestrator | [WARNING]: Skipped
2026-03-07 01:07:04.437755 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:04.437761 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-03-07 01:07:04.437767 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:04.437773 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-03-07 01:07:04.437780 | orchestrator | [WARNING]: Skipped
2026-03-07 01:07:04.437786 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:04.437793 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-03-07 01:07:04.437799 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-07 01:07:04.437807 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-03-07 01:07:04.437816 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-07 01:07:04.437824 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-07 01:07:04.437833 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-07 01:07:04.437843 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-07 01:07:04.437849 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-07 01:07:04.437855 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-07 01:07:04.437862 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-07 01:07:04.437868 | orchestrator |
2026-03-07 01:07:04.437875 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-03-07 01:07:04.437881 | orchestrator | Saturday 07 March 2026 01:04:20 +0000 (0:00:02.909) 0:01:09.719 ********
2026-03-07 01:07:04.437888 |
orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-07 01:07:04.437894 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:07:04.437905 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-07 01:07:04.437917 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:07:04.437923 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-07 01:07:04.437930 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:07:04.437937 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-07 01:07:04.437943 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:07:04.437950 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-07 01:07:04.437956 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:07:04.437963 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-07 01:07:04.437969 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:07:04.437975 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-07 01:07:04.437982 | orchestrator |
2026-03-07 01:07:04.437988 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-03-07 01:07:04.437994 | orchestrator | Saturday 07 March 2026 01:04:43 +0000 (0:00:22.319) 0:01:32.039 ********
2026-03-07 01:07:04.438001 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-07 01:07:04.438007 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-07 01:07:04.438036 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:07:04.438044 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:07:04.438051 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-07 01:07:04.438057 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:07:04.438063 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-07 01:07:04.438069 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:07:04.438075 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-07 01:07:04.438081 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:07:04.438087 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-07 01:07:04.438110 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:07:04.438117 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-07 01:07:04.438123 | orchestrator |
2026-03-07 01:07:04.438130 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-03-07 01:07:04.438136 | orchestrator | Saturday 07 March 2026 01:04:48 +0000 (0:00:04.984) 0:01:37.023 ********
2026-03-07 01:07:04.438143 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-07 01:07:04.438151 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-07 01:07:04.438157 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:07:04.438164 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-07 01:07:04.438170 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-07 01:07:04.438177 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:07:04.438183 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:07:04.438194 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-07 01:07:04.438201 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:07:04.438208 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-07 01:07:04.438219 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:07:04.438225 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-07 01:07:04.438231 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:07:04.438237 | orchestrator |
2026-03-07 01:07:04.438244 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-03-07 01:07:04.438250 | orchestrator | Saturday 07 March 2026 01:04:51 +0000 (0:00:03.413) 0:01:40.437 ********
2026-03-07 01:07:04.438256 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-07 01:07:04.438263 | orchestrator |
2026-03-07 01:07:04.438269 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-03-07 01:07:04.438275 | orchestrator | Saturday 07 March 2026 01:04:52 +0000 (0:00:01.257) 0:01:41.695 ********
2026-03-07 01:07:04.438282 | orchestrator | skipping: [testbed-manager]
2026-03-07 01:07:04.438288 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:07:04.438295 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:07:04.438301 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:07:04.438307 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:07:04.438314 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:07:04.438319 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:07:04.438326 | orchestrator |
2026-03-07 01:07:04.438332 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-03-07 01:07:04.438338 | orchestrator | Saturday 07 March 2026 01:04:54 +0000 (0:00:01.370) 0:01:43.066 ********
2026-03-07 01:07:04.438349 | orchestrator | skipping: [testbed-manager]
2026-03-07 01:07:04.438355 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:07:04.438362 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:07:04.438368 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:07:04.438374 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:07:04.438380 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:07:04.438386 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:07:04.438392 | orchestrator |
2026-03-07 01:07:04.438398 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-03-07 01:07:04.438405 | orchestrator | Saturday 07 March 2026 01:04:57 +0000 (0:00:03.158) 0:01:46.224 ********
2026-03-07 01:07:04.438412 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-07 01:07:04.438418 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-07 01:07:04.438425 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:07:04.438431 | orchestrator | skipping: [testbed-manager]
2026-03-07 01:07:04.438437 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-07 01:07:04.438443 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:07:04.438450 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-07 01:07:04.438456 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:07:04.438463 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-07 01:07:04.438469 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:07:04.438475 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-07 01:07:04.438481 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:07:04.438488 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-07 01:07:04.438494 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:07:04.438500 | orchestrator |
2026-03-07 01:07:04.438506 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-03-07 01:07:04.438513 | orchestrator | Saturday 07 March 2026 01:04:59 +0000 (0:00:02.321) 0:01:48.546 ********
2026-03-07 01:07:04.438519 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-07 01:07:04.438530 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-07 01:07:04.438537 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:07:04.438543 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:07:04.438549 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-07 01:07:04.438556 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:07:04.438563 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-07 01:07:04.438569 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-07 01:07:04.438576 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:07:04.438582 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-07 01:07:04.438588 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:07:04.438594 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-07 01:07:04.438601 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:07:04.438607 | orchestrator |
2026-03-07 01:07:04.438614 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-03-07 01:07:04.438623 | orchestrator | Saturday 07 March 2026 01:05:01 +0000 (0:00:02.176) 0:01:50.722 ********
2026-03-07 01:07:04.438628 | orchestrator | [WARNING]: Skipped
2026-03-07 01:07:04.438633 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-03-07 01:07:04.438639 | orchestrator | due to this access issue:
2026-03-07 01:07:04.438644 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-03-07 01:07:04.438650 | orchestrator | not a directory
2026-03-07 01:07:04.438655 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-07 01:07:04.438660 | orchestrator |
2026-03-07 01:07:04.438665 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-03-07 01:07:04.438670 | orchestrator | Saturday 07 March 2026 01:05:03 +0000 (0:00:01.885) 0:01:52.608 ********
2026-03-07 01:07:04.438675 | orchestrator | skipping: [testbed-manager]
2026-03-07 01:07:04.438680 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:07:04.438685 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:07:04.438691 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:07:04.438696 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:07:04.438701 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:07:04.438706 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:07:04.438711 | orchestrator |
2026-03-07 01:07:04.438716 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-03-07 01:07:04.438721 | orchestrator | Saturday 07 March 2026 01:05:04 +0000 (0:00:00.886) 0:01:53.495 ********
2026-03-07 01:07:04.438726 | orchestrator | skipping: [testbed-manager]
2026-03-07 01:07:04.438731 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:07:04.438736 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:07:04.438741 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:07:04.438746 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:07:04.438752 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:07:04.438757 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:07:04.438762 | orchestrator |
2026-03-07 01:07:04.438768 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-03-07 01:07:04.438776 | orchestrator | Saturday 07 March 2026 01:05:05 +0000 (0:00:01.101) 0:01:54.596 ********
2026-03-07 01:07:04.438783 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-07 01:07:04.438797 | orchestrator | changed: [testbed-node-3] =>
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.438803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.438808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.438818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.438823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.438829 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.438838 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.438850 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-07 01:07:04.438856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.438861 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.438867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.438877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.438882 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.438888 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.438897 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.438985 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.439016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.439026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.439034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.439051 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.439066 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-07 01:07:04.439087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.439123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.439132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-07 01:07:04.439141 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.439151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.439156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-07 01:07:04.439162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-07 01:07:04.439185 | orchestrator |
2026-03-07 01:07:04.439191 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-03-07 01:07:04.439196 | orchestrator | Saturday 07 March 2026 01:05:10 +0000 (0:00:05.007) 0:01:59.604 ********
2026-03-07 01:07:04.439202 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-07 01:07:04.439211 | orchestrator | skipping: [testbed-manager]
2026-03-07 01:07:04.439216 | orchestrator |
2026-03-07 01:07:04.439221 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-07 01:07:04.439226 | orchestrator | Saturday 07 March 2026 01:05:12 +0000 (0:00:01.451) 0:02:01.055 ********
2026-03-07 01:07:04.439232 | orchestrator |
2026-03-07 01:07:04.439237 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-07 01:07:04.439242 | orchestrator | Saturday 07 March 2026 01:05:12 +0000 (0:00:00.080) 0:02:01.136 ********
2026-03-07 01:07:04.439247 | orchestrator |
2026-03-07 01:07:04.439252 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-07 01:07:04.439257 | orchestrator | Saturday 07 March 2026 01:05:12 +0000 (0:00:00.080) 0:02:01.217 ********
2026-03-07 01:07:04.439262 | orchestrator |
2026-03-07 01:07:04.439267 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-07 01:07:04.439273 | orchestrator | Saturday 07 March 2026 01:05:12 +0000 (0:00:00.077) 0:02:01.294 ********
2026-03-07 01:07:04.439278 | orchestrator |
2026-03-07 01:07:04.439283 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-07 01:07:04.439289 | orchestrator | Saturday 07 March 2026 01:05:12 +0000 (0:00:00.432) 0:02:01.727 ********
2026-03-07 01:07:04.439294 | orchestrator |
2026-03-07 01:07:04.439299 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-07 01:07:04.439304 | orchestrator | Saturday 07 March 2026 01:05:12 +0000 (0:00:00.078) 0:02:01.805 ********
2026-03-07 01:07:04.439309 | orchestrator |
2026-03-07 01:07:04.439315 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-07 01:07:04.439320 | orchestrator | Saturday 07 March 2026 01:05:13 +0000 (0:00:00.245) 0:02:02.050 ********
2026-03-07 01:07:04.439325 | orchestrator |
2026-03-07 01:07:04.439330 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-03-07 01:07:04.439335 | orchestrator | Saturday 07 March 2026 01:05:13 +0000 (0:00:00.217) 0:02:02.268 ********
2026-03-07 01:07:04.439340 | orchestrator | changed: [testbed-manager]
2026-03-07 01:07:04.439346 | orchestrator |
2026-03-07 01:07:04.439351 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-03-07 01:07:04.439356 | orchestrator | Saturday 07 March 2026 01:05:37 +0000 (0:00:23.653) 0:02:25.921 ********
2026-03-07 01:07:04.439361 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:07:04.439367 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:07:04.439372 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:07:04.439377 | orchestrator | changed: [testbed-manager]
2026-03-07 01:07:04.439382 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:07:04.439387 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:07:04.439392 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:07:04.439398 | orchestrator |
2026-03-07 01:07:04.439403 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-03-07 01:07:04.439408 | orchestrator | Saturday 07 March 2026 01:05:51 +0000 (0:00:14.363) 0:02:40.285 ********
2026-03-07 01:07:04.439413 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:07:04.439418 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:07:04.439423 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:07:04.439428 | orchestrator |
2026-03-07 01:07:04.439434 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-03-07 01:07:04.439439 | orchestrator | Saturday 07 March 2026 01:05:56 +0000 (0:00:05.137) 0:02:45.423 ********
2026-03-07 01:07:04.439449 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:07:04.439454 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:07:04.439459 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:07:04.439465 | orchestrator |
2026-03-07 01:07:04.439470 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-03-07 01:07:04.439475 | orchestrator | Saturday 07 March 2026 01:06:07 +0000 (0:00:10.902) 0:02:56.325 ********
2026-03-07 01:07:04.439480 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:07:04.439485 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:07:04.439491 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:07:04.439496 | orchestrator | changed: [testbed-manager]
2026-03-07 01:07:04.439501 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:07:04.439507 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:07:04.439516 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:07:04.439522 | orchestrator |
2026-03-07 01:07:04.439527 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-03-07 01:07:04.439532 | orchestrator | Saturday 07 March 2026 01:06:22 +0000 (0:00:15.487) 0:03:11.813 ********
2026-03-07 01:07:04.439537 | orchestrator | changed: [testbed-manager]
2026-03-07 01:07:04.439542 | orchestrator |
2026-03-07 01:07:04.439548 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-03-07 01:07:04.439553 | orchestrator | Saturday 07 March 2026 01:06:31 +0000 (0:00:08.475) 0:03:20.289 ********
2026-03-07 01:07:04.439558 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:07:04.439563 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:07:04.439568 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:07:04.439574 | orchestrator |
2026-03-07 01:07:04.439579 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-03-07 01:07:04.439584 | orchestrator | Saturday 07 March 2026 01:06:43 +0000 (0:00:12.324) 0:03:32.613 ********
2026-03-07 01:07:04.439589 | orchestrator | changed: [testbed-manager]
2026-03-07 01:07:04.439594 | orchestrator |
2026-03-07 01:07:04.439600 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-03-07 01:07:04.439605 | orchestrator | Saturday 07 March 2026 01:06:54 +0000 (0:00:10.629) 0:03:43.243 ********
2026-03-07 01:07:04.439610 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:07:04.439615 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:07:04.439620 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:07:04.439625 | orchestrator |
2026-03-07 01:07:04.439630 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:07:04.439636 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-07 01:07:04.439646 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-07 01:07:04.439651 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-07 01:07:04.439657 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-07 01:07:04.439662 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-07 01:07:04.439667 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-07 01:07:04.439672 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-07 01:07:04.439677 | orchestrator |
2026-03-07 01:07:04.439683 | orchestrator |
2026-03-07 01:07:04.439688 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 01:07:04.439697 | orchestrator | Saturday 07 March 2026 01:07:01 +0000 (0:00:06.872) 0:03:50.115 ********
2026-03-07 01:07:04.439703 | orchestrator | ===============================================================================
2026-03-07 01:07:04.439708 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 34.43s
2026-03-07 01:07:04.439713 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 23.65s
2026-03-07 01:07:04.439719 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 22.32s
2026-03-07 01:07:04.439724 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.49s
2026-03-07 01:07:04.439729 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.36s
2026-03-07 01:07:04.439734 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 12.32s
2026-03-07 01:07:04.439739 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.90s
2026-03-07 01:07:04.439744 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.63s
2026-03-07 01:07:04.439749 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.48s
2026-03-07 01:07:04.439754 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.78s
2026-03-07 01:07:04.439760 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.87s
2026-03-07 01:07:04.439765 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.27s
2026-03-07 01:07:04.439770 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.14s
2026-03-07 01:07:04.439775 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.01s
2026-03-07 01:07:04.439780 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.98s
2026-03-07 01:07:04.439785 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.93s
2026-03-07 01:07:04.439790 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.41s
2026-03-07 01:07:04.439795 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.16s
2026-03-07 01:07:04.439800 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.91s
2026-03-07 01:07:04.439806 | orchestrator | prometheus : include_tasks ---------------------------------------------- 2.55s
2026-03-07 01:07:04.439814 | orchestrator | 2026-03-07 01:07:04 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED
2026-03-07 01:07:04.439820 | orchestrator | 2026-03-07 01:07:04 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED
2026-03-07 01:07:04.439825 | orchestrator | 2026-03-07 01:07:04 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED
2026-03-07 01:07:04.439830 | orchestrator | 2026-03-07 01:07:04 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:07:07.481907 |
orchestrator | 2026-03-07 01:07:07 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:07:07.483635 | orchestrator | 2026-03-07 01:07:07 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED
2026-03-07 01:07:07.485560 | orchestrator | 2026-03-07 01:07:07 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED
2026-03-07 01:07:07.487350 | orchestrator | 2026-03-07 01:07:07 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED
2026-03-07 01:07:07.487411 | orchestrator | 2026-03-07 01:07:07 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:07:10.525488 | orchestrator | 2026-03-07 01:07:10 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:07:10.526995 | orchestrator | 2026-03-07 01:07:10 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED
2026-03-07 01:07:10.529983 | orchestrator | 2026-03-07 01:07:10 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED
2026-03-07 01:07:10.530214 | orchestrator | 2026-03-07 01:07:10 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED
2026-03-07 01:07:10.530232 | orchestrator | 2026-03-07 01:07:10 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:07:13.572426 | orchestrator | 2026-03-07 01:07:13 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:07:13.574397 | orchestrator | 2026-03-07 01:07:13 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED
2026-03-07 01:07:13.576406 | orchestrator | 2026-03-07 01:07:13 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED
2026-03-07 01:07:13.579972 | orchestrator | 2026-03-07 01:07:13 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED
2026-03-07 01:07:13.580010 | orchestrator | 2026-03-07 01:07:13 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:07:16.621698 | orchestrator | 2026-03-07 01:07:16 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:07:16.626432 | orchestrator | 2026-03-07 01:07:16 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED
2026-03-07 01:07:16.628664 | orchestrator | 2026-03-07 01:07:16 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED
2026-03-07 01:07:16.629835 | orchestrator | 2026-03-07 01:07:16 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED
2026-03-07 01:07:16.630304 | orchestrator | 2026-03-07 01:07:16 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:07:19.674814 | orchestrator | 2026-03-07 01:07:19 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:07:19.676766 | orchestrator | 2026-03-07 01:07:19 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED
2026-03-07 01:07:19.679828 | orchestrator | 2026-03-07 01:07:19 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED
2026-03-07 01:07:19.681220 | orchestrator | 2026-03-07 01:07:19 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED
2026-03-07 01:07:19.681280 | orchestrator | 2026-03-07 01:07:19 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:07:22.726618 | orchestrator | 2026-03-07 01:07:22 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:07:22.727239 | orchestrator | 2026-03-07 01:07:22 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED
2026-03-07 01:07:22.729943 | orchestrator | 2026-03-07 01:07:22 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED
2026-03-07 01:07:22.731009 | orchestrator | 2026-03-07 01:07:22 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED
2026-03-07 01:07:22.731024 | orchestrator | 2026-03-07 01:07:22 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:07:25.787599 | orchestrator | 2026-03-07 01:07:25 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:07:25.789045 | orchestrator | 2026-03-07 01:07:25 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED
2026-03-07 01:07:25.793954 | orchestrator | 2026-03-07 01:07:25 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED
2026-03-07 01:07:25.796717 | orchestrator | 2026-03-07 01:07:25 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED
2026-03-07 01:07:25.797009 | orchestrator | 2026-03-07 01:07:25 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:07:28.838042 | orchestrator | 2026-03-07 01:07:28 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:07:28.840203 | orchestrator | 2026-03-07 01:07:28 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED
2026-03-07 01:07:28.842350 | orchestrator | 2026-03-07 01:07:28 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED
2026-03-07 01:07:28.844774 | orchestrator | 2026-03-07 01:07:28 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED
2026-03-07 01:07:28.844817 | orchestrator | 2026-03-07 01:07:28 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:07:31.878728 | orchestrator | 2026-03-07 01:07:31 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:07:31.881369 | orchestrator | 2026-03-07 01:07:31 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED
2026-03-07 01:07:31.887593 | orchestrator | 2026-03-07 01:07:31 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED
2026-03-07 01:07:31.889002 | orchestrator | 2026-03-07 01:07:31 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED
2026-03-07 01:07:31.891497 | orchestrator | 2026-03-07 01:07:31 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:07:34.937322 | orchestrator | 2026-03-07 01:07:34 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:07:34.937859 | orchestrator | 2026-03-07 01:07:34 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED
2026-03-07 01:07:34.938531 | orchestrator | 2026-03-07 01:07:34 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED
2026-03-07 01:07:34.939494 | orchestrator | 2026-03-07 01:07:34 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED
2026-03-07 01:07:34.939680 | orchestrator | 2026-03-07 01:07:34 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:07:37.980407 | orchestrator | 2026-03-07 01:07:37 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:07:37.981687 | orchestrator | 2026-03-07 01:07:37 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED
2026-03-07 01:07:37.982665 | orchestrator | 2026-03-07 01:07:37 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED
2026-03-07 01:07:37.988954 | orchestrator | 2026-03-07 01:07:37 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED
2026-03-07 01:07:37.989043 | orchestrator | 2026-03-07 01:07:37 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:07:41.034274 | orchestrator | 2026-03-07 01:07:41 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:07:41.036413 | orchestrator | 2026-03-07 01:07:41 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED
2026-03-07 01:07:41.037048 | orchestrator | 2026-03-07 01:07:41 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED
2026-03-07 01:07:41.037955 | orchestrator | 2026-03-07 01:07:41 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED
2026-03-07 01:07:41.038058 | orchestrator | 2026-03-07 01:07:41 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:07:44.086293 | orchestrator | 2026-03-07 01:07:44 | INFO  | Task
c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:07:44.087545 | orchestrator | 2026-03-07 01:07:44 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:07:44.088519 | orchestrator | 2026-03-07 01:07:44 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:07:44.089615 | orchestrator | 2026-03-07 01:07:44 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:07:44.089651 | orchestrator | 2026-03-07 01:07:44 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:47.128325 | orchestrator | 2026-03-07 01:07:47 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:07:47.128415 | orchestrator | 2026-03-07 01:07:47 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:07:47.128965 | orchestrator | 2026-03-07 01:07:47 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:07:47.129927 | orchestrator | 2026-03-07 01:07:47 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:07:47.129983 | orchestrator | 2026-03-07 01:07:47 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:50.167880 | orchestrator | 2026-03-07 01:07:50 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:07:50.168912 | orchestrator | 2026-03-07 01:07:50 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:07:50.172374 | orchestrator | 2026-03-07 01:07:50 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:07:50.173002 | orchestrator | 2026-03-07 01:07:50 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:07:50.173040 | orchestrator | 2026-03-07 01:07:50 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:53.217849 | orchestrator | 2026-03-07 01:07:53 | INFO  | Task 
c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:07:53.219774 | orchestrator | 2026-03-07 01:07:53 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:07:53.220860 | orchestrator | 2026-03-07 01:07:53 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:07:53.221740 | orchestrator | 2026-03-07 01:07:53 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:07:53.223957 | orchestrator | 2026-03-07 01:07:53 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:56.269102 | orchestrator | 2026-03-07 01:07:56 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:07:56.269780 | orchestrator | 2026-03-07 01:07:56 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:07:56.271218 | orchestrator | 2026-03-07 01:07:56 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:07:56.272396 | orchestrator | 2026-03-07 01:07:56 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:07:56.272474 | orchestrator | 2026-03-07 01:07:56 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:07:59.313487 | orchestrator | 2026-03-07 01:07:59 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:07:59.314816 | orchestrator | 2026-03-07 01:07:59 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:07:59.317599 | orchestrator | 2026-03-07 01:07:59 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:07:59.318674 | orchestrator | 2026-03-07 01:07:59 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:07:59.318707 | orchestrator | 2026-03-07 01:07:59 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:02.351022 | orchestrator | 2026-03-07 01:08:02 | INFO  | Task 
c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:08:02.354564 | orchestrator | 2026-03-07 01:08:02 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:08:02.355606 | orchestrator | 2026-03-07 01:08:02 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:08:02.356684 | orchestrator | 2026-03-07 01:08:02 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:08:02.356810 | orchestrator | 2026-03-07 01:08:02 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:05.392688 | orchestrator | 2026-03-07 01:08:05 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:08:05.393742 | orchestrator | 2026-03-07 01:08:05 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:08:05.394964 | orchestrator | 2026-03-07 01:08:05 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:08:05.396266 | orchestrator | 2026-03-07 01:08:05 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:08:05.396340 | orchestrator | 2026-03-07 01:08:05 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:08.430454 | orchestrator | 2026-03-07 01:08:08 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:08:08.432285 | orchestrator | 2026-03-07 01:08:08 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:08:08.433351 | orchestrator | 2026-03-07 01:08:08 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:08:08.434583 | orchestrator | 2026-03-07 01:08:08 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:08:08.435017 | orchestrator | 2026-03-07 01:08:08 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:11.475763 | orchestrator | 2026-03-07 01:08:11 | INFO  | Task 
c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:08:11.476712 | orchestrator | 2026-03-07 01:08:11 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:08:11.477717 | orchestrator | 2026-03-07 01:08:11 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:08:11.478632 | orchestrator | 2026-03-07 01:08:11 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:08:11.479040 | orchestrator | 2026-03-07 01:08:11 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:14.515325 | orchestrator | 2026-03-07 01:08:14 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:08:14.515455 | orchestrator | 2026-03-07 01:08:14 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:08:14.527971 | orchestrator | 2026-03-07 01:08:14 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:08:14.528067 | orchestrator | 2026-03-07 01:08:14 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:08:14.528102 | orchestrator | 2026-03-07 01:08:14 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:17.556974 | orchestrator | 2026-03-07 01:08:17 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:08:17.557802 | orchestrator | 2026-03-07 01:08:17 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:08:17.558617 | orchestrator | 2026-03-07 01:08:17 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:08:17.559714 | orchestrator | 2026-03-07 01:08:17 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:08:17.559836 | orchestrator | 2026-03-07 01:08:17 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:20.644663 | orchestrator | 2026-03-07 01:08:20 | INFO  | Task 
c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:08:20.645779 | orchestrator | 2026-03-07 01:08:20 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:08:20.646747 | orchestrator | 2026-03-07 01:08:20 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:08:20.647751 | orchestrator | 2026-03-07 01:08:20 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:08:20.647779 | orchestrator | 2026-03-07 01:08:20 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:23.691612 | orchestrator | 2026-03-07 01:08:23 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:08:23.692717 | orchestrator | 2026-03-07 01:08:23 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:08:23.694553 | orchestrator | 2026-03-07 01:08:23 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:08:23.696714 | orchestrator | 2026-03-07 01:08:23 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:08:23.696791 | orchestrator | 2026-03-07 01:08:23 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:26.736679 | orchestrator | 2026-03-07 01:08:26 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:08:26.737798 | orchestrator | 2026-03-07 01:08:26 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:08:26.739220 | orchestrator | 2026-03-07 01:08:26 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:08:26.740366 | orchestrator | 2026-03-07 01:08:26 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:08:26.740409 | orchestrator | 2026-03-07 01:08:26 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:29.832412 | orchestrator | 2026-03-07 01:08:29 | INFO  | Task 
c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:08:29.832519 | orchestrator | 2026-03-07 01:08:29 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:08:29.832534 | orchestrator | 2026-03-07 01:08:29 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:08:29.832546 | orchestrator | 2026-03-07 01:08:29 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:08:29.832558 | orchestrator | 2026-03-07 01:08:29 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:32.866368 | orchestrator | 2026-03-07 01:08:32 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:08:32.867388 | orchestrator | 2026-03-07 01:08:32 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:08:32.869387 | orchestrator | 2026-03-07 01:08:32 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:08:32.870777 | orchestrator | 2026-03-07 01:08:32 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:08:32.870828 | orchestrator | 2026-03-07 01:08:32 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:35.911717 | orchestrator | 2026-03-07 01:08:35 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:08:35.913541 | orchestrator | 2026-03-07 01:08:35 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:08:35.914629 | orchestrator | 2026-03-07 01:08:35 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:08:35.915643 | orchestrator | 2026-03-07 01:08:35 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:08:35.915707 | orchestrator | 2026-03-07 01:08:35 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:38.943661 | orchestrator | 2026-03-07 01:08:38 | INFO  | Task 
c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:08:38.944043 | orchestrator | 2026-03-07 01:08:38 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:08:38.944953 | orchestrator | 2026-03-07 01:08:38 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:08:38.945619 | orchestrator | 2026-03-07 01:08:38 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:08:38.945644 | orchestrator | 2026-03-07 01:08:38 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:41.985002 | orchestrator | 2026-03-07 01:08:41 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:08:41.985343 | orchestrator | 2026-03-07 01:08:41 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:08:41.986350 | orchestrator | 2026-03-07 01:08:41 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:08:41.988062 | orchestrator | 2026-03-07 01:08:41 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:08:41.988203 | orchestrator | 2026-03-07 01:08:41 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:45.024376 | orchestrator | 2026-03-07 01:08:45 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:08:45.024941 | orchestrator | 2026-03-07 01:08:45 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:08:45.026293 | orchestrator | 2026-03-07 01:08:45 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:08:45.029120 | orchestrator | 2026-03-07 01:08:45 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:08:45.029192 | orchestrator | 2026-03-07 01:08:45 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:48.062356 | orchestrator | 2026-03-07 01:08:48 | INFO  | Task 
c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:08:48.063972 | orchestrator | 2026-03-07 01:08:48 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:08:48.065914 | orchestrator | 2026-03-07 01:08:48 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:08:48.067488 | orchestrator | 2026-03-07 01:08:48 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:08:48.067677 | orchestrator | 2026-03-07 01:08:48 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:51.109043 | orchestrator | 2026-03-07 01:08:51 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:08:51.110635 | orchestrator | 2026-03-07 01:08:51 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:08:51.111803 | orchestrator | 2026-03-07 01:08:51 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:08:51.113317 | orchestrator | 2026-03-07 01:08:51 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:08:51.113396 | orchestrator | 2026-03-07 01:08:51 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:54.149455 | orchestrator | 2026-03-07 01:08:54 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:08:54.150682 | orchestrator | 2026-03-07 01:08:54 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:08:54.154311 | orchestrator | 2026-03-07 01:08:54 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:08:54.156767 | orchestrator | 2026-03-07 01:08:54 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:08:54.157487 | orchestrator | 2026-03-07 01:08:54 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:08:57.233699 | orchestrator | 2026-03-07 01:08:57 | INFO  | Task 
c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:08:57.234363 | orchestrator | 2026-03-07 01:08:57 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:08:57.235285 | orchestrator | 2026-03-07 01:08:57 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:08:57.236241 | orchestrator | 2026-03-07 01:08:57 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:08:57.236287 | orchestrator | 2026-03-07 01:08:57 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:09:00.280302 | orchestrator | 2026-03-07 01:09:00 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:09:00.281839 | orchestrator | 2026-03-07 01:09:00 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:09:00.282989 | orchestrator | 2026-03-07 01:09:00 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:09:00.283859 | orchestrator | 2026-03-07 01:09:00 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:09:00.283889 | orchestrator | 2026-03-07 01:09:00 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:09:03.322945 | orchestrator | 2026-03-07 01:09:03 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:09:03.326492 | orchestrator | 2026-03-07 01:09:03 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:09:03.326559 | orchestrator | 2026-03-07 01:09:03 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:09:03.326564 | orchestrator | 2026-03-07 01:09:03 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:09:03.326569 | orchestrator | 2026-03-07 01:09:03 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:09:06.374949 | orchestrator | 2026-03-07 01:09:06 | INFO  | Task 
c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:09:06.375890 | orchestrator | 2026-03-07 01:09:06 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:09:06.378248 | orchestrator | 2026-03-07 01:09:06 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:09:06.379443 | orchestrator | 2026-03-07 01:09:06 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:09:06.379701 | orchestrator | 2026-03-07 01:09:06 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:09:09.416473 | orchestrator | 2026-03-07 01:09:09 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:09:09.417377 | orchestrator | 2026-03-07 01:09:09 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:09:09.418675 | orchestrator | 2026-03-07 01:09:09 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:09:09.419582 | orchestrator | 2026-03-07 01:09:09 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:09:09.419898 | orchestrator | 2026-03-07 01:09:09 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:09:12.461010 | orchestrator | 2026-03-07 01:09:12 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:09:12.461736 | orchestrator | 2026-03-07 01:09:12 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:09:12.464110 | orchestrator | 2026-03-07 01:09:12 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:09:12.465111 | orchestrator | 2026-03-07 01:09:12 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:09:12.466526 | orchestrator | 2026-03-07 01:09:12 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:09:15.503461 | orchestrator | 2026-03-07 01:09:15 | INFO  | Task 
c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:09:15.507864 | orchestrator | 2026-03-07 01:09:15 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:09:15.510603 | orchestrator | 2026-03-07 01:09:15 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:09:15.512263 | orchestrator | 2026-03-07 01:09:15 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:09:15.512396 | orchestrator | 2026-03-07 01:09:15 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:09:18.563404 | orchestrator | 2026-03-07 01:09:18 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:09:18.564995 | orchestrator | 2026-03-07 01:09:18 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:09:18.566680 | orchestrator | 2026-03-07 01:09:18 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:09:18.568568 | orchestrator | 2026-03-07 01:09:18 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:09:18.568627 | orchestrator | 2026-03-07 01:09:18 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:09:21.609562 | orchestrator | 2026-03-07 01:09:21 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:09:21.612657 | orchestrator | 2026-03-07 01:09:21 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:09:21.614538 | orchestrator | 2026-03-07 01:09:21 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:09:21.616129 | orchestrator | 2026-03-07 01:09:21 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:09:21.616405 | orchestrator | 2026-03-07 01:09:21 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:09:24.706308 | orchestrator | 2026-03-07 01:09:24 | INFO  | Task 
c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:09:24.708494 | orchestrator | 2026-03-07 01:09:24 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:09:24.709388 | orchestrator | 2026-03-07 01:09:24 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:09:24.712264 | orchestrator | 2026-03-07 01:09:24 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:09:24.712329 | orchestrator | 2026-03-07 01:09:24 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:09:27.762383 | orchestrator | 2026-03-07 01:09:27 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:09:27.762549 | orchestrator | 2026-03-07 01:09:27 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:09:27.763185 | orchestrator | 2026-03-07 01:09:27 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:09:27.764235 | orchestrator | 2026-03-07 01:09:27 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state STARTED 2026-03-07 01:09:27.764263 | orchestrator | 2026-03-07 01:09:27 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:09:30.797396 | orchestrator | 2026-03-07 01:09:30 | INFO  | Task d0fc9c2f-b26e-4644-b699-3a56028d6b40 is in state STARTED 2026-03-07 01:09:30.798658 | orchestrator | 2026-03-07 01:09:30 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:09:30.799120 | orchestrator | 2026-03-07 01:09:30 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED 2026-03-07 01:09:30.799993 | orchestrator | 2026-03-07 01:09:30 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:09:30.802253 | orchestrator | 2026-03-07 01:09:30 | INFO  | Task 09a13776-4083-42a2-97d8-4b444a9c84ff is in state SUCCESS 2026-03-07 01:09:30.804389 | orchestrator | 2026-03-07 01:09:30.804432 | orchestrator 
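The repeated checks above are the output of a simple state-polling loop: query every pending task, report its state, and sleep before the next round until all tasks reach a terminal state. A minimal sketch of that pattern (all names here are hypothetical; the actual OSISM client code differs):

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=600.0):
    """Poll task states until all reach a terminal state or the timeout expires.

    get_state: callable mapping a task id to its current state string.
    """
    terminal = {"SUCCESS", "FAILURE"}
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for tid in sorted(pending):
            states[tid] = get_state(tid)
            print(f"Task {tid} is in state {states[tid]}")
        # Drop tasks that have finished; keep polling the rest.
        pending = {t for t in pending if states[t] not in terminal}
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

Note the loop re-reports every still-pending task on each round, which is exactly why the log shows the same four UUIDs over and over until one flips to SUCCESS.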
2026-03-07 01:09:30 | orchestrator | [buffered Ansible play output, flushed at once:]

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Saturday 07 March 2026 01:06:59 +0000 (0:00:00.330) 0:00:00.330 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Saturday 07 March 2026 01:06:59 +0000 (0:00:00.309) 0:00:00.640 ********
ok: [testbed-node-0] => (item=enable_barbican_True)
ok: [testbed-node-1] => (item=enable_barbican_True)
ok: [testbed-node-2] => (item=enable_barbican_True)

PLAY [Apply role barbican] *****************************************************

TASK [barbican : include_tasks] ************************************************
Saturday 07 March 2026 01:07:00 +0000 (0:00:00.431) 0:00:01.071 ********
included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-ks-register : barbican | Creating services] **********************
Saturday 07 March 2026 01:07:00 +0000 (0:00:00.621) 0:00:01.693 ********
changed: [testbed-node-0] => (item=barbican (key-manager))

TASK [service-ks-register : barbican | Creating endpoints] *********************
Saturday 07 March 2026 01:07:04 +0000 (0:00:03.913) 0:00:05.606 ********
changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)

TASK [service-ks-register : barbican | Creating projects] **********************
Saturday 07 March 2026 01:07:11 +0000 (0:00:07.298) 0:00:12.905 ********
ok: [testbed-node-0] => (item=service)

TASK [service-ks-register : barbican | Creating users] *************************
Saturday 07 March 2026 01:07:15 +0000 (0:00:03.717) 0:00:16.622 ********
[WARNING]: Module did not set no_log for update_password
changed: [testbed-node-0] => (item=barbican -> service)

TASK [service-ks-register : barbican | Creating roles] *************************
Saturday 07 March 2026 01:07:19 +0000 (0:00:04.379) 0:00:21.002 ********
ok: [testbed-node-0] => (item=admin)
changed: [testbed-node-0] => (item=key-manager:service-admin)
changed: [testbed-node-0] => (item=creator)
changed: [testbed-node-0] => (item=observer)
changed: [testbed-node-0] => (item=audit)

TASK [service-ks-register : barbican | Granting user roles] ********************
Saturday 07 March 2026 01:07:38 +0000 (0:00:18.433) 0:00:39.435 ********
changed: [testbed-node-0] => (item=barbican -> service -> admin)

TASK [barbican : Ensuring config directories exist] ****************************
Saturday 07 March 2026 01:07:42 +0000 (0:00:04.433) 0:00:43.869 ********
changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
changed: [testbed-node-0] => (item={'key': 'barbican-api', ...})  [same item; healthcheck address 192.168.16.10]
changed: [testbed-node-2] => (item={'key': 'barbican-api', ...})  [same item; healthcheck address 192.168.16.12]
| changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.804786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.804791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.804798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.804804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.804808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.804816 | orchestrator | 
2026-03-07 01:09:30.804820 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-07 01:09:30.804824 | orchestrator | Saturday 07 March 2026 01:07:45 +0000 (0:00:02.702) 0:00:46.572 ******** 2026-03-07 01:09:30.804827 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-07 01:09:30.804831 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-07 01:09:30.804835 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-07 01:09:30.804839 | orchestrator | 2026-03-07 01:09:30.804842 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-07 01:09:30.804849 | orchestrator | Saturday 07 March 2026 01:07:46 +0000 (0:00:01.205) 0:00:47.778 ******** 2026-03-07 01:09:30.804853 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:30.804857 | orchestrator | 2026-03-07 01:09:30.804861 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-07 01:09:30.804865 | orchestrator | Saturday 07 March 2026 01:07:46 +0000 (0:00:00.147) 0:00:47.925 ******** 2026-03-07 01:09:30.804868 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:30.804872 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:30.804877 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:30.804880 | orchestrator | 2026-03-07 01:09:30.804885 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-07 01:09:30.804890 | orchestrator | Saturday 07 March 2026 01:07:47 +0000 (0:00:00.653) 0:00:48.579 ******** 2026-03-07 01:09:30.804896 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:09:30.804902 | orchestrator | 2026-03-07 01:09:30.804908 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] 
******* 2026-03-07 01:09:30.804915 | orchestrator | Saturday 07 March 2026 01:07:49 +0000 (0:00:01.525) 0:00:50.104 ******** 2026-03-07 01:09:30.804919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:09:30.804929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:09:30.804934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:09:30.804944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.804949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.804953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.804957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.804964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.804972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.804976 | orchestrator | 2026-03-07 01:09:30.804981 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-07 01:09:30.804986 | orchestrator | Saturday 07 March 2026 01:07:54 +0000 (0:00:05.047) 0:00:55.152 ******** 2026-03-07 01:09:30.804993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 
'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 01:09:30.804998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 01:09:30.805003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:09:30.805008 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:30.805016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 01:09:30.805025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 01:09:30.805029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:09:30.805034 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:30.805041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 01:09:30.805046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 01:09:30.805051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:09:30.805056 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:30.805061 | orchestrator | 2026-03-07 01:09:30.805067 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-07 01:09:30.805072 | orchestrator | Saturday 07 March 2026 01:07:55 +0000 (0:00:01.742) 0:00:56.894 ******** 2026-03-07 01:09:30.805080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 01:09:30.805085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 01:09:30.805093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:09:30.805098 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:30.805105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 01:09:30.805112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 01:09:30.805129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:09:30.805253 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:30.805269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 01:09:30.805279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 01:09:30.805285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:09:30.805289 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:30 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:09:30.805294 | orchestrator | 2026-03-07 01:09:30.805299 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-07 01:09:30.805303 | orchestrator | Saturday 07 March 2026 01:07:57 +0000 (0:00:01.234) 0:00:58.128 ******** 2026-03-07 01:09:30.805308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:09:30.805323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:09:30.805540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:09:30.805549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.805553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.805557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.805590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.805596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.805600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.805604 | orchestrator | 2026-03-07 01:09:30.805608 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-07 01:09:30.805612 | orchestrator | Saturday 07 March 2026 01:08:02 +0000 (0:00:05.625) 0:01:03.753 ******** 2026-03-07 01:09:30.805616 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:09:30.805620 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:09:30.805624 | orchestrator | changed: [testbed-node-2] 2026-03-07 
01:09:30.805628 | orchestrator | 2026-03-07 01:09:30.805632 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-07 01:09:30.805636 | orchestrator | Saturday 07 March 2026 01:08:07 +0000 (0:00:04.946) 0:01:08.700 ******** 2026-03-07 01:09:30.805643 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-07 01:09:30.805647 | orchestrator | 2026-03-07 01:09:30.805651 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-07 01:09:30.805654 | orchestrator | Saturday 07 March 2026 01:08:10 +0000 (0:00:02.518) 0:01:11.219 ******** 2026-03-07 01:09:30.805658 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:30.805662 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:30.805666 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:30.805670 | orchestrator | 2026-03-07 01:09:30.805674 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-07 01:09:30.805678 | orchestrator | Saturday 07 March 2026 01:08:11 +0000 (0:00:01.260) 0:01:12.479 ******** 2026-03-07 01:09:30.805682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:09:30.805693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:09:30.805697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:09:30.805701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.805708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.805712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.805719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.805725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.805729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.805733 | orchestrator | 2026-03-07 01:09:30.805737 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-07 01:09:30.805741 | orchestrator | Saturday 07 March 2026 01:08:27 +0000 (0:00:16.525) 0:01:29.005 ******** 2026-03-07 01:09:30.805748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 01:09:30.805752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 01:09:30.805759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:09:30.805763 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:30.805769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 01:09:30.805774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 01:09:30.805778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:09:30.805782 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:30.805788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-07 01:09:30.805796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-07 01:09:30.805800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:09:30.805804 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:30.805808 | orchestrator | 2026-03-07 01:09:30.805812 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-07 01:09:30.805816 | orchestrator | Saturday 07 March 2026 01:08:30 +0000 (0:00:02.196) 0:01:31.202 ******** 2026-03-07 01:09:30.805822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 
'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:09:30.805827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:09:30.805834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-07 01:09:30.805842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.805846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.805853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.805857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.805861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.805878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:09:30.805886 | orchestrator | 2026-03-07 01:09:30.805890 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-07 01:09:30.805894 | orchestrator | Saturday 07 March 2026 01:08:34 +0000 (0:00:04.803) 0:01:36.005 ******** 2026-03-07 01:09:30.805897 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:09:30.805901 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:09:30.805905 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:09:30.805909 | orchestrator | 2026-03-07 01:09:30.805913 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-07 01:09:30.805917 | orchestrator | Saturday 07 March 2026 01:08:35 +0000 (0:00:00.944) 0:01:36.950 ******** 2026-03-07 01:09:30.805920 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:09:30.805924 | orchestrator | 2026-03-07 01:09:30.805928 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-07 01:09:30.805932 | orchestrator | Saturday 07 March 2026 01:08:38 +0000 (0:00:02.830) 0:01:39.780 ******** 2026-03-07 01:09:30.805936 | orchestrator | changed: [testbed-node-0] 
2026-03-07 01:09:30.805940 | orchestrator | 2026-03-07 01:09:30.805944 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-07 01:09:30.805948 | orchestrator | Saturday 07 March 2026 01:08:41 +0000 (0:00:03.000) 0:01:42.781 ******** 2026-03-07 01:09:30.805952 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:09:30.805956 | orchestrator | 2026-03-07 01:09:30.805959 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-07 01:09:30.805963 | orchestrator | Saturday 07 March 2026 01:08:55 +0000 (0:00:13.869) 0:01:56.651 ******** 2026-03-07 01:09:30.805967 | orchestrator | 2026-03-07 01:09:30.805971 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-07 01:09:30.805975 | orchestrator | Saturday 07 March 2026 01:08:55 +0000 (0:00:00.239) 0:01:56.890 ******** 2026-03-07 01:09:30.805979 | orchestrator | 2026-03-07 01:09:30.805982 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-07 01:09:30.805986 | orchestrator | Saturday 07 March 2026 01:08:55 +0000 (0:00:00.154) 0:01:57.044 ******** 2026-03-07 01:09:30.805991 | orchestrator | 2026-03-07 01:09:30.805995 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-07 01:09:30.805999 | orchestrator | Saturday 07 March 2026 01:08:56 +0000 (0:00:00.216) 0:01:57.261 ******** 2026-03-07 01:09:30.806003 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:09:30.806007 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:09:30.806011 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:09:30.806047 | orchestrator | 2026-03-07 01:09:30.806053 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-07 01:09:30.806056 | orchestrator | Saturday 07 March 2026 01:09:05 +0000 (0:00:09.389) 0:02:06.651 ******** 
2026-03-07 01:09:30.806060 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:09:30.806064 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:09:30.806072 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:09:30.806076 | orchestrator | 2026-03-07 01:09:30.806080 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-07 01:09:30.806084 | orchestrator | Saturday 07 March 2026 01:09:14 +0000 (0:00:08.746) 0:02:15.398 ******** 2026-03-07 01:09:30.806088 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:09:30.806092 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:09:30.806096 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:09:30.806099 | orchestrator | 2026-03-07 01:09:30.806103 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:09:30.806112 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-07 01:09:30.806117 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-07 01:09:30.806121 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-07 01:09:30.806125 | orchestrator | 2026-03-07 01:09:30.806129 | orchestrator | 2026-03-07 01:09:30.806177 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:09:30.806187 | orchestrator | Saturday 07 March 2026 01:09:28 +0000 (0:00:14.021) 0:02:29.419 ******** 2026-03-07 01:09:30.806192 | orchestrator | =============================================================================== 2026-03-07 01:09:30.806197 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 18.43s 2026-03-07 01:09:30.806202 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 16.53s 2026-03-07 
01:09:30.806206 | orchestrator | barbican : Restart barbican-worker container --------------------------- 14.02s
2026-03-07 01:09:30.806211 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.87s
2026-03-07 01:09:30.806216 | orchestrator | barbican : Restart barbican-api container ------------------------------- 9.39s
2026-03-07 01:09:30.806221 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 8.75s
2026-03-07 01:09:30.806225 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.30s
2026-03-07 01:09:30.806230 | orchestrator | barbican : Copying over config.json files for services ------------------ 5.63s
2026-03-07 01:09:30.806234 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 5.05s
2026-03-07 01:09:30.806242 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 4.95s
2026-03-07 01:09:30.806247 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.80s
2026-03-07 01:09:30.806252 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.43s
2026-03-07 01:09:30.806256 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.38s
2026-03-07 01:09:30.806261 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.91s
2026-03-07 01:09:30.806266 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.72s
2026-03-07 01:09:30.806271 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 3.00s
2026-03-07 01:09:30.806276 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.83s
2026-03-07 01:09:30.806280 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.70s
2026-03-07 01:09:30.806285 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.52s
2026-03-07 01:09:30.806290 | orchestrator | barbican : Copying over existing policy file ---------------------------- 2.20s
2026-03-07 01:09:33.912125 | orchestrator | 2026-03-07 01:09:33 | INFO  | Task d0fc9c2f-b26e-4644-b699-3a56028d6b40 is in state STARTED
2026-03-07 01:09:33.914164 | orchestrator | 2026-03-07 01:09:33 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:09:33.916677 | orchestrator | 2026-03-07 01:09:33 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state STARTED
2026-03-07 01:09:33.918971 | orchestrator | 2026-03-07 01:09:33 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED
2026-03-07 01:09:33.919050 | orchestrator | 2026-03-07 01:09:33 | INFO  | Wait 1 second(s) until the next check
[... the same four task-state checks repeated roughly every 3 seconds from 01:09:36 to 01:10:44, all four tasks remaining in state STARTED ...]
2026-03-07 01:10:47.180181 | orchestrator | 2026-03-07 01:10:47 | INFO  | Task d0fc9c2f-b26e-4644-b699-3a56028d6b40 is in state STARTED
2026-03-07 01:10:47.180414 | orchestrator | 2026-03-07 01:10:47 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:10:47.183058 | orchestrator | 2026-03-07 01:10:47 | INFO  | Task bb05ad0c-8f9a-46f0-a2f8-35cb17ce8317 is in state SUCCESS
2026-03-07 01:10:47.185550 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 01:10:47.185565 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-07 01:10:47.185574 | orchestrator | Saturday 07 March 2026 01:07:06 +0000 (0:00:00.301) 0:00:00.301 ********
2026-03-07 01:10:47.185581 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:10:47.185588 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:10:47.185595 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:10:47.185606 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-07 01:10:47.185612 | orchestrator | Saturday 07 March 2026 01:07:06 +0000 (0:00:00.334) 0:00:00.636 ********
2026-03-07 01:10:47.185620 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-03-07 01:10:47.185625 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-03-07 01:10:47.185629 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-03-07 01:10:47.185637 | orchestrator | PLAY [Apply role designate]
****************************************************
2026-03-07 01:10:47.185645 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-07 01:10:47.185649 | orchestrator | Saturday 07 March 2026 01:07:07 +0000 (0:00:00.481) 0:00:01.117 ********
2026-03-07 01:10:47.185653 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:10:47.185692 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-03-07 01:10:47.185696 | orchestrator | Saturday 07 March 2026 01:07:07 +0000 (0:00:00.596) 0:00:01.714 ********
2026-03-07 01:10:47.185700 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-03-07 01:10:47.185719 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-03-07 01:10:47.185723 | orchestrator | Saturday 07 March 2026 01:07:11 +0000 (0:00:03.530) 0:00:05.244 ********
2026-03-07 01:10:47.185727 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-03-07 01:10:47.185731 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-03-07 01:10:47.185739 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-03-07 01:10:47.185743 | orchestrator | Saturday 07 March 2026 01:07:18 +0000 (0:00:07.238) 0:00:12.483 ********
2026-03-07 01:10:47.185747 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-07 01:10:47.185755 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-03-07 01:10:47.185759 | orchestrator | Saturday 07 March 2026 01:07:22 +0000 (0:00:03.846) 0:00:16.330 ********
2026-03-07 01:10:47.185763 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-07 01:10:47.185766 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-03-07 01:10:47.185774 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-03-07 01:10:47.185778 | orchestrator | Saturday 07 March 2026 01:07:26 +0000 (0:00:04.372) 0:00:20.702 ********
2026-03-07 01:10:47.185782 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-07 01:10:47.185804 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-03-07 01:10:47.185859 | orchestrator | Saturday 07 March 2026 01:07:31 +0000 (0:00:04.349) 0:00:25.052 ********
2026-03-07 01:10:47.185864 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-03-07 01:10:47.185871 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-03-07 01:10:47.185875 | orchestrator | Saturday 07 March 2026 01:07:35 +0000 (0:00:04.575) 0:00:29.628 ********
2026-03-07 01:10:47.185883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:10:47.185909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:10:47.185914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:10:47.185923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.185929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.185937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.185942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.185951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.185955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.185962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.185968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.185975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.185979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.185983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.185989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 
2026-03-07 01:10:47.185993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186012 | orchestrator | 2026-03-07 01:10:47.186047 | 
orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-07 01:10:47.186052 | orchestrator | Saturday 07 March 2026 01:07:39 +0000 (0:00:03.568) 0:00:33.196 ******** 2026-03-07 01:10:47.186057 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:10:47.186062 | orchestrator | 2026-03-07 01:10:47.186066 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-07 01:10:47.186071 | orchestrator | Saturday 07 March 2026 01:07:39 +0000 (0:00:00.148) 0:00:33.345 ******** 2026-03-07 01:10:47.186075 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:10:47.186080 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:10:47.186084 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:10:47.186088 | orchestrator | 2026-03-07 01:10:47.186092 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-07 01:10:47.186097 | orchestrator | Saturday 07 March 2026 01:07:39 +0000 (0:00:00.301) 0:00:33.647 ******** 2026-03-07 01:10:47.186101 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:10:47.186124 | orchestrator | 2026-03-07 01:10:47.186144 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-07 01:10:47.186152 | orchestrator | Saturday 07 March 2026 01:07:40 +0000 (0:00:00.913) 0:00:34.560 ******** 2026-03-07 01:10:47.186162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:10:47.186167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:10:47.186175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:10:47.186193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186304 | orchestrator | 2026-03-07 01:10:47.186309 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-07 01:10:47.186314 | orchestrator | Saturday 07 March 2026 01:07:48 +0000 (0:00:07.403) 0:00:41.965 ******** 2026-03-07 01:10:47.186326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:10:47.186338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 01:10:47.186349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  
2026-03-07 01:10:47.186386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186391 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:10:47.186396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:10:47.186403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 01:10:47.186412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186441 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:10:47.186449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 
01:10:47.186461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 01:10:47.186472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186489 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186502 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:10:47.186509 | orchestrator | 2026-03-07 01:10:47.186515 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-07 01:10:47.186522 | orchestrator | Saturday 07 March 2026 01:07:51 +0000 (0:00:03.199) 0:00:45.164 ******** 2026-03-07 01:10:47.186528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:10:47.186541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 01:10:47.186548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186579 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:10:47.186586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:10:47.186603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 01:10:47.186610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186640 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:10:47.186647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:10:47.186662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 01:10:47.186670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.186738 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:10:47.186769 | orchestrator | 2026-03-07 01:10:47.186774 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-07 01:10:47.186778 | orchestrator | Saturday 07 March 2026 01:07:53 +0000 (0:00:02.451) 0:00:47.615 ******** 2026-03-07 01:10:47.186782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:10:47.186793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:10:47.186813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:10:47.186817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186923 | orchestrator | 2026-03-07 01:10:47.186927 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-07 01:10:47.186931 | orchestrator | Saturday 07 March 2026 01:08:01 +0000 (0:00:08.061) 0:00:55.677 ******** 2026-03-07 01:10:47.186935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:10:47.186942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:10:47.186949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:10:47.186953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.186957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187076 | orchestrator | 2026-03-07 01:10:47.187080 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-07 01:10:47.187084 | orchestrator | Saturday 07 March 2026 01:08:37 +0000 (0:00:35.150) 0:01:30.828 ******** 2026-03-07 01:10:47.187088 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-07 01:10:47.187092 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-07 01:10:47.187096 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-07 01:10:47.187099 | orchestrator | 2026-03-07 01:10:47.187103 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-07 01:10:47.187107 | orchestrator | Saturday 07 March 2026 01:08:44 +0000 (0:00:07.788) 0:01:38.616 ******** 2026-03-07 01:10:47.187111 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-07 01:10:47.187115 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-07 01:10:47.187118 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-07 01:10:47.187122 | orchestrator | 2026-03-07 01:10:47.187126 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-07 01:10:47.187173 | orchestrator | Saturday 07 March 2026 01:08:49 +0000 (0:00:04.952) 0:01:43.569 ******** 
2026-03-07 01:10:47.187182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:10:47.187187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:10:47.187195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:10:47.187204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-03-07 01:10:47.187264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187280 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187284 | orchestrator | 2026-03-07 01:10:47.187288 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-07 01:10:47.187292 | orchestrator | Saturday 07 March 2026 01:08:54 +0000 (0:00:04.735) 0:01:48.305 ******** 2026-03-07 01:10:47.187300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:10:47.187304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:10:47.187315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:10:47.187319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187609 | orchestrator | 2026-03-07 01:10:47.187615 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-07 01:10:47.187623 | orchestrator | Saturday 07 March 2026 01:08:58 +0000 (0:00:04.185) 0:01:52.491 ******** 2026-03-07 01:10:47.187630 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:10:47.187636 | orchestrator | skipping: [testbed-node-1] 2026-03-07 
01:10:47.187642 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:10:47.187648 | orchestrator | 2026-03-07 01:10:47.187655 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-07 01:10:47.187661 | orchestrator | Saturday 07 March 2026 01:09:00 +0000 (0:00:01.647) 0:01:54.138 ******** 2026-03-07 01:10:47.187668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:10:47.187683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 01:10:47.187693 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187726 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:10:47.187734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:10:47.187748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-07 01:10:47.187758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 01:10:47.187765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-07 01:10:47.187771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187820 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:10:47.187825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:10:47.187834 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:10:47.187838 | orchestrator | 2026-03-07 01:10:47.187842 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-07 01:10:47.187845 | orchestrator | Saturday 07 March 2026 01:09:01 +0000 (0:00:01.094) 0:01:55.232 ******** 2026-03-07 01:10:47.187850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:10:47.187861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:10:47.187868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-07 01:10:47.187873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:10:47.187948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})
2026-03-07 01:10:47.187952 | orchestrator |
2026-03-07 01:10:47.187959 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-07 01:10:47.187963 | orchestrator | Saturday 07 March 2026 01:09:07 +0000 (0:00:06.215) 0:02:01.448 ********
2026-03-07 01:10:47.187967 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:10:47.187971 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:10:47.187975 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:10:47.187978 | orchestrator |
2026-03-07 01:10:47.187982 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-03-07 01:10:47.187986 | orchestrator | Saturday 07 March 2026 01:09:07 +0000 (0:00:00.359) 0:02:01.807 ********
2026-03-07 01:10:47.187990 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-03-07 01:10:47.187994 | orchestrator |
2026-03-07 01:10:47.187998 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-03-07 01:10:47.188001 | orchestrator | Saturday 07 March 2026 01:09:10 +0000 (0:00:02.511) 0:02:04.319 ********
2026-03-07 01:10:47.188005 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-07 01:10:47.188009 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-03-07 01:10:47.188013 | orchestrator |
2026-03-07 01:10:47.188017 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-03-07 01:10:47.188021 | orchestrator | Saturday 07 March 2026 01:09:13 +0000 (0:00:02.755) 0:02:07.074 ********
2026-03-07 01:10:47.188025 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:10:47.188028 | orchestrator |
2026-03-07 01:10:47.188032 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-07 01:10:47.188036 | orchestrator | Saturday 07 March 2026 01:09:33 +0000 (0:00:20.286) 0:02:27.360 ********
2026-03-07 01:10:47.188040 | orchestrator |
2026-03-07 01:10:47.188044 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-07 01:10:47.188047 | orchestrator | Saturday 07 March 2026 01:09:33 +0000 (0:00:00.162) 0:02:27.522 ********
2026-03-07 01:10:47.188051 | orchestrator |
2026-03-07 01:10:47.188055 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-07 01:10:47.188059 | orchestrator | Saturday 07 March 2026 01:09:34 +0000 (0:00:00.324) 0:02:27.846 ********
2026-03-07 01:10:47.188062 | orchestrator |
2026-03-07 01:10:47.188066 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-03-07 01:10:47.188072 | orchestrator | Saturday 07 March 2026 01:09:34 +0000 (0:00:00.308) 0:02:28.154 ********
2026-03-07 01:10:47.188076 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:10:47.188080 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:10:47.188084 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:10:47.188088 | orchestrator |
2026-03-07 01:10:47.188092 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-03-07 01:10:47.188095 | orchestrator | Saturday 07 March 2026 01:09:47 +0000 (0:00:13.446) 0:02:41.601 ********
2026-03-07 01:10:47.188099 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:10:47.188103 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:10:47.188107 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:10:47.188111 | orchestrator |
2026-03-07 01:10:47.188114 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-03-07 01:10:47.188118 | orchestrator | Saturday 07 March 2026 01:09:59 +0000 (0:00:11.623) 0:02:53.224 ********
2026-03-07 01:10:47.188122 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:10:47.188147 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:10:47.188152 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:10:47.188157 | orchestrator |
2026-03-07 01:10:47.188162 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-03-07 01:10:47.188166 | orchestrator | Saturday 07 March 2026 01:10:09 +0000 (0:00:10.468) 0:03:03.693 ********
2026-03-07 01:10:47.188171 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:10:47.188175 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:10:47.188180 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:10:47.188184 | orchestrator |
2026-03-07 01:10:47.188194 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-03-07 01:10:47.188198 | orchestrator | Saturday 07 March 2026 01:10:17 +0000 (0:00:07.910) 0:03:11.603 ********
2026-03-07 01:10:47.188203 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:10:47.188207 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:10:47.188212 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:10:47.188216 | orchestrator |
2026-03-07 01:10:47.188221 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-03-07 01:10:47.188225 | orchestrator | Saturday 07 March 2026 01:10:29 +0000 (0:00:12.110) 0:03:23.714 ********
2026-03-07 01:10:47.188230 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:10:47.188235 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:10:47.188239 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:10:47.188243 | orchestrator |
2026-03-07 01:10:47.188248 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-03-07 01:10:47.188253 | orchestrator | Saturday 07 March 2026 01:10:36 +0000 (0:00:06.918) 0:03:30.632 ********
2026-03-07 01:10:47.188257 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:10:47.188261 | orchestrator |
2026-03-07 01:10:47.188266 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:10:47.188271 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-07 01:10:47.188277 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-07 01:10:47.188282 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-07 01:10:47.188286 | orchestrator |
2026-03-07 01:10:47.188291 | orchestrator |
2026-03-07 01:10:47.188295 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 01:10:47.188300 | orchestrator | Saturday 07 March 2026 01:10:44 +0000 (0:00:07.931) 0:03:38.564 ********
2026-03-07 01:10:47.188304 | orchestrator | ===============================================================================
2026-03-07 01:10:47.188309 | orchestrator | designate : Copying over designate.conf -------------------------------- 35.15s
2026-03-07 01:10:47.188313 | orchestrator | designate : Running Designate bootstrap container ---------------------- 20.29s
2026-03-07 01:10:47.188318 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.45s
2026-03-07 01:10:47.188322 | orchestrator | designate : Restart designate-mdns container --------------------------- 12.11s
2026-03-07 01:10:47.188326 | orchestrator | designate : Restart designate-api container ---------------------------- 11.62s
2026-03-07 01:10:47.188331 | orchestrator | designate : Restart designate-central container ------------------------ 10.47s
2026-03-07 01:10:47.188335 | orchestrator | designate : Copying over config.json files for services ----------------- 8.06s
2026-03-07 01:10:47.188339 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.93s
2026-03-07 01:10:47.188344 | orchestrator | designate : Restart designate-producer container ------------------------ 7.91s
2026-03-07 01:10:47.188349 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.79s
2026-03-07 01:10:47.188353 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.40s
2026-03-07 01:10:47.188358 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.24s
2026-03-07 01:10:47.188362 | orchestrator | designate : Restart designate-worker container -------------------------- 6.92s
2026-03-07 01:10:47.188367 | orchestrator | designate : Check designate containers ---------------------------------- 6.22s
2026-03-07 01:10:47.188372 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.95s
2026-03-07 01:10:47.188376 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.74s
2026-03-07 01:10:47.188380 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.58s
2026-03-07 01:10:47.188388 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.37s
2026-03-07 01:10:47.188393 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 4.35s
2026-03-07 01:10:47.188398 | orchestrator | designate : Copying over rndc.key --------------------------------------- 4.19s
2026-03-07 01:10:47.188405 | orchestrator | 2026-03-07 01:10:47 | INFO  | Task 4bfc8129-7cbe-435c-8e80-c4247d75e517 is in state STARTED
2026-03-07 01:10:47.188410 | orchestrator | 2026-03-07 01:10:47 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED
2026-03-07 01:10:47.188414 | orchestrator | 2026-03-07 01:10:47 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:10:50.231192 | orchestrator | 2026-03-07 01:10:50 | INFO  | Task d0fc9c2f-b26e-4644-b699-3a56028d6b40 is in state STARTED
2026-03-07 01:10:50.234973 | orchestrator | 2026-03-07 01:10:50 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:10:50.238779 | orchestrator | 2026-03-07 01:10:50 | INFO  | Task 4bfc8129-7cbe-435c-8e80-c4247d75e517 is in state STARTED
2026-03-07 01:10:50.242981 | orchestrator | 2026-03-07 01:10:50 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED
2026-03-07 01:10:50.243034 | orchestrator | 2026-03-07 01:10:50 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:10:53.282318 | orchestrator | 2026-03-07 01:10:53 | INFO  | Task d0fc9c2f-b26e-4644-b699-3a56028d6b40 is in state STARTED
2026-03-07 01:10:53.284475 | orchestrator | 2026-03-07 01:10:53 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:10:53.286546 | orchestrator | 2026-03-07 01:10:53 | INFO  | Task 4bfc8129-7cbe-435c-8e80-c4247d75e517 is in state STARTED
2026-03-07 01:10:53.288090 | orchestrator | 2026-03-07 01:10:53 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED
2026-03-07 01:10:53.288762 | orchestrator | 2026-03-07 01:10:53 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:10:56.322229 | orchestrator | 2026-03-07 01:10:56 | INFO  | Task d0fc9c2f-b26e-4644-b699-3a56028d6b40 is in state SUCCESS
2026-03-07 01:10:56.322684 | orchestrator | 2026-03-07 01:10:56 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:10:56.323917 | orchestrator | 2026-03-07 01:10:56 | INFO  | Task 4bfc8129-7cbe-435c-8e80-c4247d75e517 is in state STARTED
2026-03-07 01:10:56.324853 | orchestrator | 2026-03-07 01:10:56 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED
2026-03-07 01:10:56.324902 | orchestrator | 2026-03-07 01:10:56 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:10:59.359233 | orchestrator | 2026-03-07 01:10:59 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:10:59.362886 | orchestrator | 2026-03-07 01:10:59 | INFO  | Task
c0f376aa-8f1e-4684-ba39-02d7ce345895 is in state STARTED 2026-03-07 01:10:59.366519 | orchestrator | 2026-03-07 01:10:59 | INFO  | Task 4bfc8129-7cbe-435c-8e80-c4247d75e517 is in state STARTED 2026-03-07 01:10:59.369008 | orchestrator | 2026-03-07 01:10:59 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:10:59.369086 | orchestrator | 2026-03-07 01:10:59 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:11:02.400501 | orchestrator | 2026-03-07 01:11:02 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:11:02.401498 | orchestrator | 2026-03-07 01:11:02 | INFO  | Task c0f376aa-8f1e-4684-ba39-02d7ce345895 is in state STARTED 2026-03-07 01:11:02.402615 | orchestrator | 2026-03-07 01:11:02 | INFO  | Task 4bfc8129-7cbe-435c-8e80-c4247d75e517 is in state STARTED 2026-03-07 01:11:02.404628 | orchestrator | 2026-03-07 01:11:02 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:11:02.404656 | orchestrator | 2026-03-07 01:11:02 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:11:05.439628 | orchestrator | 2026-03-07 01:11:05 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:11:05.443100 | orchestrator | 2026-03-07 01:11:05 | INFO  | Task c0f376aa-8f1e-4684-ba39-02d7ce345895 is in state STARTED 2026-03-07 01:11:05.443207 | orchestrator | 2026-03-07 01:11:05 | INFO  | Task 4bfc8129-7cbe-435c-8e80-c4247d75e517 is in state STARTED 2026-03-07 01:11:05.450205 | orchestrator | 2026-03-07 01:11:05 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:11:05.450311 | orchestrator | 2026-03-07 01:11:05 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:11:08.494725 | orchestrator | 2026-03-07 01:11:08 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:11:08.494832 | orchestrator | 2026-03-07 01:11:08 | INFO  | Task 
c0f376aa-8f1e-4684-ba39-02d7ce345895 is in state STARTED 2026-03-07 01:12:12.512142 | orchestrator | 2026-03-07 01:12:12 | INFO  | Task 4bfc8129-7cbe-435c-8e80-c4247d75e517 is in state STARTED 2026-03-07 01:12:12.512451 | orchestrator | 2026-03-07 01:12:12 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:12:12.512961 | orchestrator | 2026-03-07 01:12:12 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:12:15.560237 | orchestrator | 2026-03-07 01:12:15 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:12:15.564921 | orchestrator | 2026-03-07 01:12:15 | INFO  | Task c0f376aa-8f1e-4684-ba39-02d7ce345895 is in state STARTED 2026-03-07 01:12:15.566453 | orchestrator | 2026-03-07 01:12:15 | INFO  | Task 4bfc8129-7cbe-435c-8e80-c4247d75e517 is in state SUCCESS 2026-03-07 01:12:15.567949 | orchestrator | 2026-03-07 01:12:15.567986 | orchestrator | 2026-03-07 01:12:15.567994 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-03-07 01:12:15.567999 | orchestrator | 2026-03-07 01:12:15.568003 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-03-07 01:12:15.568007 | orchestrator | Saturday 07 March 2026 01:09:40 +0000 (0:00:00.266) 0:00:00.266 ******** 2026-03-07 01:12:15.568011 | orchestrator | changed: [localhost] 2026-03-07 01:12:15.568016 | orchestrator | 2026-03-07 01:12:15.568020 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-03-07 01:12:15.568024 | orchestrator | Saturday 07 March 2026 01:09:42 +0000 (0:00:01.622) 0:00:01.888 ******** 2026-03-07 01:12:15.568028 | orchestrator | changed: [localhost] 2026-03-07 01:12:15.568032 | orchestrator | 2026-03-07 01:12:15.568036 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-03-07 01:12:15.568039 | orchestrator | Saturday 07 March 2026 
01:10:46 +0000 (0:01:03.416) 0:01:05.305 ******** 2026-03-07 01:12:15.568046 | orchestrator | changed: [localhost] 2026-03-07 01:12:15.568052 | orchestrator | 2026-03-07 01:12:15.568059 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:12:15.568065 | orchestrator | 2026-03-07 01:12:15.568071 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:12:15.568078 | orchestrator | Saturday 07 March 2026 01:10:53 +0000 (0:00:07.427) 0:01:12.733 ******** 2026-03-07 01:12:15.568084 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:12:15.568091 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:12:15.568098 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:12:15.568104 | orchestrator | 2026-03-07 01:12:15.568111 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:12:15.568135 | orchestrator | Saturday 07 March 2026 01:10:54 +0000 (0:00:00.869) 0:01:13.602 ******** 2026-03-07 01:12:15.568141 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-03-07 01:12:15.568148 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-03-07 01:12:15.568155 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-03-07 01:12:15.568160 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-03-07 01:12:15.568164 | orchestrator | 2026-03-07 01:12:15.568168 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-03-07 01:12:15.568172 | orchestrator | skipping: no hosts matched 2026-03-07 01:12:15.568176 | orchestrator | 2026-03-07 01:12:15.568180 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:12:15.568185 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
2026-03-07 01:12:15.568190 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:12:15.568194 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:12:15.568210 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:12:15.568214 | orchestrator | 2026-03-07 01:12:15.568218 | orchestrator | 2026-03-07 01:12:15.568221 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:12:15.568225 | orchestrator | Saturday 07 March 2026 01:10:55 +0000 (0:00:01.165) 0:01:14.768 ******** 2026-03-07 01:12:15.568229 | orchestrator | =============================================================================== 2026-03-07 01:12:15.568233 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 63.42s 2026-03-07 01:12:15.568237 | orchestrator | Download ironic-agent kernel -------------------------------------------- 7.43s 2026-03-07 01:12:15.568240 | orchestrator | Ensure the destination directory exists --------------------------------- 1.62s 2026-03-07 01:12:15.568244 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.17s 2026-03-07 01:12:15.568248 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.87s 2026-03-07 01:12:15.568252 | orchestrator | 2026-03-07 01:12:15.568256 | orchestrator | 2026-03-07 01:12:15.568259 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:12:15.568263 | orchestrator | 2026-03-07 01:12:15.568267 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:12:15.568271 | orchestrator | Saturday 07 March 2026 01:10:52 +0000 (0:00:00.286) 0:00:00.286 ******** 2026-03-07 01:12:15.568275 | 
orchestrator | ok: [testbed-node-0] 2026-03-07 01:12:15.568278 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:12:15.568282 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:12:15.568286 | orchestrator | 2026-03-07 01:12:15.568290 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:12:15.568294 | orchestrator | Saturday 07 March 2026 01:10:52 +0000 (0:00:00.351) 0:00:00.638 ******** 2026-03-07 01:12:15.568297 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-07 01:12:15.568301 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-07 01:12:15.568305 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-07 01:12:15.568309 | orchestrator | 2026-03-07 01:12:15.568313 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-07 01:12:15.568316 | orchestrator | 2026-03-07 01:12:15.568320 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-07 01:12:15.568324 | orchestrator | Saturday 07 March 2026 01:10:54 +0000 (0:00:01.218) 0:00:01.856 ******** 2026-03-07 01:12:15.568328 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:12:15.568332 | orchestrator | 2026-03-07 01:12:15.568358 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-07 01:12:15.568362 | orchestrator | Saturday 07 March 2026 01:10:55 +0000 (0:00:01.351) 0:00:03.208 ******** 2026-03-07 01:12:15.568375 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-07 01:12:15.568379 | orchestrator | 2026-03-07 01:12:15.568383 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-07 01:12:15.568389 | orchestrator | Saturday 07 March 2026 01:10:59 +0000 (0:00:04.203) 
0:00:07.412 ******** 2026-03-07 01:12:15.568393 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-07 01:12:15.568397 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-07 01:12:15.568401 | orchestrator | 2026-03-07 01:12:15.568405 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-07 01:12:15.568409 | orchestrator | Saturday 07 March 2026 01:11:07 +0000 (0:00:07.501) 0:00:14.913 ******** 2026-03-07 01:12:15.568412 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-07 01:12:15.568416 | orchestrator | 2026-03-07 01:12:15.568420 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-07 01:12:15.568427 | orchestrator | Saturday 07 March 2026 01:11:11 +0000 (0:00:04.000) 0:00:18.913 ******** 2026-03-07 01:12:15.568431 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-07 01:12:15.568435 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-03-07 01:12:15.568438 | orchestrator | 2026-03-07 01:12:15.568442 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-07 01:12:15.568446 | orchestrator | Saturday 07 March 2026 01:11:15 +0000 (0:00:04.674) 0:00:23.588 ******** 2026-03-07 01:12:15.568450 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-07 01:12:15.568453 | orchestrator | 2026-03-07 01:12:15.568457 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-07 01:12:15.568461 | orchestrator | Saturday 07 March 2026 01:11:19 +0000 (0:00:03.957) 0:00:27.546 ******** 2026-03-07 01:12:15.568465 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-07 01:12:15.568468 | orchestrator | 2026-03-07 01:12:15.568472 | 
orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-07 01:12:15.568476 | orchestrator | Saturday 07 March 2026 01:11:23 +0000 (0:00:04.156) 0:00:31.702 ******** 2026-03-07 01:12:15.568480 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:12:15.568484 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:12:15.568487 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:12:15.568491 | orchestrator | 2026-03-07 01:12:15.568495 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-07 01:12:15.568499 | orchestrator | Saturday 07 March 2026 01:11:24 +0000 (0:00:00.343) 0:00:32.046 ******** 2026-03-07 01:12:15.568504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:12:15.568510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:12:15.568520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:12:15.568528 | orchestrator | 2026-03-07 01:12:15.568532 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-07 01:12:15.568536 | orchestrator | Saturday 07 March 2026 01:11:25 +0000 (0:00:00.874) 0:00:32.920 ******** 2026-03-07 01:12:15.568540 | orchestrator | skipping: [testbed-node-0] 2026-03-07 
01:12:15.568544 | orchestrator | 2026-03-07 01:12:15.568547 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-07 01:12:15.568551 | orchestrator | Saturday 07 March 2026 01:11:25 +0000 (0:00:00.147) 0:00:33.068 ******** 2026-03-07 01:12:15.568555 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:12:15.568559 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:12:15.568563 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:12:15.568566 | orchestrator | 2026-03-07 01:12:15.568571 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-07 01:12:15.568576 | orchestrator | Saturday 07 March 2026 01:11:26 +0000 (0:00:00.840) 0:00:33.909 ******** 2026-03-07 01:12:15.568580 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:12:15.568584 | orchestrator | 2026-03-07 01:12:15.568589 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-07 01:12:15.568593 | orchestrator | Saturday 07 March 2026 01:11:26 +0000 (0:00:00.795) 0:00:34.705 ******** 2026-03-07 01:12:15.568598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:12:15.568603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:12:15.568610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:12:15.568617 | orchestrator | 2026-03-07 01:12:15.568622 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-07 01:12:15.568628 | orchestrator | Saturday 07 March 2026 01:11:28 +0000 (0:00:01.646) 0:00:36.351 ******** 2026-03-07 01:12:15.568632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 01:12:15.568637 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:12:15.568645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 01:12:15.568651 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:12:15.568658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 01:12:15.568666 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:12:15.568672 | orchestrator | 2026-03-07 01:12:15.568679 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-07 01:12:15.568685 | orchestrator | Saturday 07 March 2026 01:11:29 +0000 (0:00:00.848) 0:00:37.200 ******** 2026-03-07 01:12:15.568700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 01:12:15.568707 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:12:15.568717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 01:12:15.568723 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:12:15.568730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 01:12:15.568737 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:12:15.568745 | orchestrator | 2026-03-07 01:12:15.568753 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-07 01:12:15.568760 | orchestrator | Saturday 07 March 2026 01:11:30 +0000 (0:00:00.757) 0:00:37.957 ******** 2026-03-07 01:12:15.568767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:12:15.568778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:12:15.568792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}}}}) 2026-03-07 01:12:15.568799 | orchestrator | 2026-03-07 01:12:15.568806 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-07 01:12:15.568812 | orchestrator | Saturday 07 March 2026 01:11:31 +0000 (0:00:01.660) 0:00:39.618 ******** 2026-03-07 01:12:15.568818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:12:15.568823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:12:15.568831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:12:15.568836 | orchestrator | 2026-03-07 01:12:15.568840 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-07 01:12:15.568844 | orchestrator | Saturday 07 March 2026 01:11:35 +0000 (0:00:03.858) 0:00:43.477 ******** 2026-03-07 01:12:15.568849 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-07 01:12:15.568855 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-07 01:12:15.568860 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-07 01:12:15.568864 | orchestrator | 2026-03-07 01:12:15.568876 | orchestrator | TASK [placement : Copying over 
migrate-db.rc.j2 configuration] ***************** 2026-03-07 01:12:15.568887 | orchestrator | Saturday 07 March 2026 01:11:37 +0000 (0:00:02.115) 0:00:45.592 ******** 2026-03-07 01:12:15.568893 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:12:15.568900 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:12:15.568906 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:12:15.568912 | orchestrator | 2026-03-07 01:12:15.568917 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-07 01:12:15.568923 | orchestrator | Saturday 07 March 2026 01:11:39 +0000 (0:00:02.091) 0:00:47.683 ******** 2026-03-07 01:12:15.568929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 01:12:15.568935 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:12:15.568942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 01:12:15.568952 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:12:15.568958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-07 01:12:15.568964 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:12:15.568998 | orchestrator | 2026-03-07 01:12:15.569004 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-07 01:12:15.569010 | orchestrator | Saturday 07 March 2026 
01:11:41 +0000 (0:00:01.109) 0:00:48.792 ******** 2026-03-07 01:12:15.569026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:12:15.569033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:12:15.569039 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-07 01:12:15.569051 | orchestrator | 2026-03-07 01:12:15.569058 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-07 01:12:15.569064 | orchestrator | Saturday 07 March 2026 01:11:43 +0000 (0:00:02.202) 0:00:50.994 ******** 2026-03-07 01:12:15.569070 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:12:15.569074 | orchestrator | 2026-03-07 01:12:15.569078 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-07 01:12:15.569082 | orchestrator | Saturday 07 March 2026 01:11:46 +0000 (0:00:03.616) 0:00:54.611 ******** 2026-03-07 01:12:15.569086 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:12:15.569089 | orchestrator | 2026-03-07 01:12:15.569093 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-07 01:12:15.569097 | orchestrator | Saturday 07 March 2026 01:11:49 +0000 (0:00:02.613) 0:00:57.224 ******** 2026-03-07 01:12:15.569101 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:12:15.569104 
| orchestrator | 2026-03-07 01:12:15.569108 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-07 01:12:15.569112 | orchestrator | Saturday 07 March 2026 01:12:05 +0000 (0:00:15.770) 0:01:12.995 ******** 2026-03-07 01:12:15.569162 | orchestrator | 2026-03-07 01:12:15.569166 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-07 01:12:15.569170 | orchestrator | Saturday 07 March 2026 01:12:05 +0000 (0:00:00.214) 0:01:13.210 ******** 2026-03-07 01:12:15.569174 | orchestrator | 2026-03-07 01:12:15.569177 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-07 01:12:15.569181 | orchestrator | Saturday 07 March 2026 01:12:05 +0000 (0:00:00.188) 0:01:13.398 ******** 2026-03-07 01:12:15.569188 | orchestrator | 2026-03-07 01:12:15.569194 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-07 01:12:15.569200 | orchestrator | Saturday 07 March 2026 01:12:05 +0000 (0:00:00.164) 0:01:13.563 ******** 2026-03-07 01:12:15.569206 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:12:15.569212 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:12:15.569218 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:12:15.569224 | orchestrator | 2026-03-07 01:12:15.569231 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:12:15.569237 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-07 01:12:15.569244 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-07 01:12:15.569255 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-07 01:12:15.569261 | orchestrator | 2026-03-07 01:12:15.569265 | orchestrator | 2026-03-07 
01:12:15.569271 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:12:15.569275 | orchestrator | Saturday 07 March 2026 01:12:12 +0000 (0:00:06.880) 0:01:20.444 ******** 2026-03-07 01:12:15.569279 | orchestrator | =============================================================================== 2026-03-07 01:12:15.569283 | orchestrator | placement : Running placement bootstrap container ---------------------- 15.77s 2026-03-07 01:12:15.569287 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.50s 2026-03-07 01:12:15.569291 | orchestrator | placement : Restart placement-api container ----------------------------- 6.88s 2026-03-07 01:12:15.569294 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.67s 2026-03-07 01:12:15.569302 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.20s 2026-03-07 01:12:15.569305 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.16s 2026-03-07 01:12:15.569309 | orchestrator | service-ks-register : placement | Creating projects --------------------- 4.00s 2026-03-07 01:12:15.569313 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.96s 2026-03-07 01:12:15.569317 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.86s 2026-03-07 01:12:15.569321 | orchestrator | placement : Creating placement databases -------------------------------- 3.62s 2026-03-07 01:12:15.569324 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.61s 2026-03-07 01:12:15.569328 | orchestrator | placement : Check placement containers ---------------------------------- 2.20s 2026-03-07 01:12:15.569332 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.12s 2026-03-07 01:12:15.569336 | 
orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.09s 2026-03-07 01:12:15.569340 | orchestrator | placement : Copying over config.json files for services ----------------- 1.66s 2026-03-07 01:12:15.569347 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.65s 2026-03-07 01:12:15.569352 | orchestrator | placement : include_tasks ----------------------------------------------- 1.35s 2026-03-07 01:12:15.569363 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.22s 2026-03-07 01:12:15.569368 | orchestrator | placement : Copying over existing policy file --------------------------- 1.11s 2026-03-07 01:12:15.569374 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.87s 2026-03-07 01:12:15.569464 | orchestrator | 2026-03-07 01:12:15 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:12:15.569475 | orchestrator | 2026-03-07 01:12:15 | INFO  | Task 2e1dd76f-d9d6-44a4-abb8-becf57ce4644 is in state STARTED 2026-03-07 01:12:15.569483 | orchestrator | 2026-03-07 01:12:15 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:12:18.618496 | orchestrator | 2026-03-07 01:12:18 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:12:18.620005 | orchestrator | 2026-03-07 01:12:18 | INFO  | Task c0f376aa-8f1e-4684-ba39-02d7ce345895 is in state STARTED 2026-03-07 01:12:18.621621 | orchestrator | 2026-03-07 01:12:18 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:12:18.623214 | orchestrator | 2026-03-07 01:12:18 | INFO  | Task 2e1dd76f-d9d6-44a4-abb8-becf57ce4644 is in state STARTED 2026-03-07 01:12:18.623240 | orchestrator | 2026-03-07 01:12:18 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:12:21.668699 | orchestrator | 2026-03-07 01:12:21 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is 
in state STARTED 2026-03-07 01:12:21.668756 | orchestrator | 2026-03-07 01:12:21 | INFO  | Task c0f376aa-8f1e-4684-ba39-02d7ce345895 is in state STARTED 2026-03-07 01:12:21.669880 | orchestrator | 2026-03-07 01:12:21 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:12:21.670767 | orchestrator | 2026-03-07 01:12:21 | INFO  | Task 2e1dd76f-d9d6-44a4-abb8-becf57ce4644 is in state STARTED 2026-03-07 01:12:21.671227 | orchestrator | 2026-03-07 01:12:21 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:12:24.720920 | orchestrator | 2026-03-07 01:12:24 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:12:24.722404 | orchestrator | 2026-03-07 01:12:24 | INFO  | Task c0f376aa-8f1e-4684-ba39-02d7ce345895 is in state STARTED 2026-03-07 01:12:24.723945 | orchestrator | 2026-03-07 01:12:24 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:12:24.725444 | orchestrator | 2026-03-07 01:12:24 | INFO  | Task 2e1dd76f-d9d6-44a4-abb8-becf57ce4644 is in state STARTED 2026-03-07 01:12:24.725680 | orchestrator | 2026-03-07 01:12:24 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:12:27.767936 | orchestrator | 2026-03-07 01:12:27 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:12:27.769887 | orchestrator | 2026-03-07 01:12:27 | INFO  | Task c0f376aa-8f1e-4684-ba39-02d7ce345895 is in state STARTED 2026-03-07 01:12:27.773162 | orchestrator | 2026-03-07 01:12:27 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:12:27.775822 | orchestrator | 2026-03-07 01:12:27 | INFO  | Task 2e1dd76f-d9d6-44a4-abb8-becf57ce4644 is in state STARTED 2026-03-07 01:12:27.775899 | orchestrator | 2026-03-07 01:12:27 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:12:30.808927 | orchestrator | 2026-03-07 01:12:30 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 
01:12:30.810628 | orchestrator | 2026-03-07 01:12:30 | INFO  | Task c0f376aa-8f1e-4684-ba39-02d7ce345895 is in state STARTED 2026-03-07 01:12:30.811287 | orchestrator | 2026-03-07 01:12:30 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state STARTED 2026-03-07 01:12:30.813026 | orchestrator | 2026-03-07 01:12:30 | INFO  | Task 2e1dd76f-d9d6-44a4-abb8-becf57ce4644 is in state STARTED 2026-03-07 01:12:30.813082 | orchestrator | 2026-03-07 01:12:30 | INFO  | Wait 1 second(s) until the next check 2026-03-07 01:14:33.952806 | orchestrator | 2026-03-07 01:14:33 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED 2026-03-07 01:14:33.953611 | orchestrator | 2026-03-07 01:14:33 | INFO  | Task c0f376aa-8f1e-4684-ba39-02d7ce345895 is in state SUCCESS 2026-03-07 01:14:33.953642 | orchestrator | 2026-03-07 01:14:33.956612 | orchestrator | 2026-03-07 01:14:33.956660 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:14:33.956667 | orchestrator | 2026-03-07 01:14:33.956672 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:14:33.956677 | orchestrator | Saturday 07 March 2026 01:11:02 +0000 (0:00:00.644) 0:00:00.644 ******** 2026-03-07 01:14:33.956681 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:14:33.956687 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:14:33.956691 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:14:33.956695 | orchestrator | 2026-03-07 01:14:33.956699 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:14:33.956703 | orchestrator | Saturday 07 March 2026 01:11:02 +0000 (0:00:00.448) 0:00:01.092 ******** 2026-03-07 01:14:33.956707 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-07 01:14:33.956712 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-07 01:14:33.956715 | orchestrator | ok: 
[testbed-node-2] => (item=enable_magnum_True) 2026-03-07 01:14:33.956720 | orchestrator | 2026-03-07 01:14:33.956724 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-07 01:14:33.956728 | orchestrator | 2026-03-07 01:14:33.956732 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-07 01:14:33.956736 | orchestrator | Saturday 07 March 2026 01:11:03 +0000 (0:00:00.974) 0:00:02.067 ******** 2026-03-07 01:14:33.956740 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:14:33.956745 | orchestrator | 2026-03-07 01:14:33.956749 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-07 01:14:33.956752 | orchestrator | Saturday 07 March 2026 01:11:04 +0000 (0:00:00.933) 0:00:03.000 ******** 2026-03-07 01:14:33.956757 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-07 01:14:33.956760 | orchestrator | 2026-03-07 01:14:33.956764 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-03-07 01:14:33.956785 | orchestrator | Saturday 07 March 2026 01:11:09 +0000 (0:00:04.271) 0:00:07.271 ******** 2026-03-07 01:14:33.956789 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-07 01:14:33.956793 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-07 01:14:33.956797 | orchestrator | 2026-03-07 01:14:33.956801 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-07 01:14:33.956804 | orchestrator | Saturday 07 March 2026 01:11:16 +0000 (0:00:07.747) 0:00:15.019 ******** 2026-03-07 01:14:33.956808 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-07 01:14:33.956812 
| orchestrator | 2026-03-07 01:14:33.956816 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-03-07 01:14:33.956820 | orchestrator | Saturday 07 March 2026 01:11:20 +0000 (0:00:04.039) 0:00:19.059 ******** 2026-03-07 01:14:33.956824 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-07 01:14:33.956828 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-07 01:14:33.956845 | orchestrator | 2026-03-07 01:14:33.956849 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-07 01:14:33.956853 | orchestrator | Saturday 07 March 2026 01:11:24 +0000 (0:00:04.012) 0:00:23.071 ******** 2026-03-07 01:14:33.956857 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-07 01:14:33.956861 | orchestrator | 2026-03-07 01:14:33.956865 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-03-07 01:14:33.956869 | orchestrator | Saturday 07 March 2026 01:11:28 +0000 (0:00:03.707) 0:00:26.778 ******** 2026-03-07 01:14:33.956873 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-07 01:14:33.956877 | orchestrator | 2026-03-07 01:14:33.956881 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-07 01:14:33.956885 | orchestrator | Saturday 07 March 2026 01:11:32 +0000 (0:00:04.366) 0:00:31.145 ******** 2026-03-07 01:14:33.956890 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:14:33.956894 | orchestrator | 2026-03-07 01:14:33.956897 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-07 01:14:33.956902 | orchestrator | Saturday 07 March 2026 01:11:36 +0000 (0:00:04.030) 0:00:35.176 ******** 2026-03-07 01:14:33.956906 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:14:33.956909 | orchestrator | 2026-03-07 
01:14:33.956914 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-07 01:14:33.956918 | orchestrator | Saturday 07 March 2026 01:11:41 +0000 (0:00:04.638) 0:00:39.815 ******** 2026-03-07 01:14:33.956923 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:14:33.956962 | orchestrator | 2026-03-07 01:14:33.957024 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-07 01:14:33.957029 | orchestrator | Saturday 07 March 2026 01:11:45 +0000 (0:00:03.703) 0:00:43.519 ******** 2026-03-07 01:14:33.957062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:14:33.957072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:14:33.957083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:14:33.957088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:33.957094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:33.957103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:33.957110 | orchestrator | 2026-03-07 01:14:33.957114 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-07 01:14:33.957118 
| orchestrator | Saturday 07 March 2026 01:11:47 +0000 (0:00:02.435) 0:00:45.954 ******** 2026-03-07 01:14:33.957122 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.957125 | orchestrator | 2026-03-07 01:14:33.957129 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-07 01:14:33.957133 | orchestrator | Saturday 07 March 2026 01:11:47 +0000 (0:00:00.134) 0:00:46.089 ******** 2026-03-07 01:14:33.957137 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.957141 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.957145 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.957148 | orchestrator | 2026-03-07 01:14:33.957152 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-07 01:14:33.957156 | orchestrator | Saturday 07 March 2026 01:11:48 +0000 (0:00:00.663) 0:00:46.753 ******** 2026-03-07 01:14:33.957160 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-07 01:14:33.957164 | orchestrator | 2026-03-07 01:14:33.957167 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-07 01:14:33.957171 | orchestrator | Saturday 07 March 2026 01:11:49 +0000 (0:00:01.039) 0:00:47.792 ******** 2026-03-07 01:14:33.957175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:14:33.957179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:14:33.957184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:14:33.957196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:33.957200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:33.957204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:33.957208 | orchestrator | 2026-03-07 01:14:33.957212 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-07 01:14:33.957216 | orchestrator | Saturday 07 March 2026 01:11:52 +0000 (0:00:02.993) 0:00:50.786 ******** 2026-03-07 01:14:33.957220 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:14:33.957223 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:14:33.957227 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:14:33.957231 | orchestrator | 2026-03-07 01:14:33.957235 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-07 01:14:33.957239 | orchestrator | Saturday 07 March 2026 01:11:52 +0000 (0:00:00.329) 0:00:51.116 ******** 2026-03-07 01:14:33.957243 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:14:33.957247 | orchestrator | 2026-03-07 01:14:33.957251 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-07 01:14:33.957255 | orchestrator | Saturday 07 March 2026 01:11:53 +0000 (0:00:00.853) 0:00:51.969 ******** 2026-03-07 01:14:33.957259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:14:33.957271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:14:33.957275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:14:33.957279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:33.957283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:33.957287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:33.957295 | orchestrator | 2026-03-07 01:14:33.957298 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-07 01:14:33.957302 | orchestrator | Saturday 07 March 2026 01:11:56 +0000 (0:00:02.623) 0:00:54.592 ******** 2026-03-07 01:14:33.957309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-07 01:14:33.957313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:14:33.957317 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.957322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-07 01:14:33.957326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:14:33.957333 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.957338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-07 01:14:33.957344 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:14:33.957348 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.957352 | orchestrator | 2026-03-07 01:14:33.957356 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-07 01:14:33.957359 | orchestrator | Saturday 07 March 2026 01:11:57 +0000 (0:00:00.742) 0:00:55.334 ******** 2026-03-07 01:14:33.957363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}})  2026-03-07 01:14:33.957367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:14:33.957371 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.957379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-07 01:14:33.957386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 
'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:14:33.957390 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.957394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-07 01:14:33.957398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:14:33.957402 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.957406 | orchestrator | 2026-03-07 01:14:33.957410 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-07 01:14:33.957413 | orchestrator | Saturday 07 March 2026 01:11:58 +0000 (0:00:01.319) 0:00:56.654 ******** 2026-03-07 01:14:33.957417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:14:33.957425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:14:33.957432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:14:33.957436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:33.957440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:33.957448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:33.957452 | orchestrator | 2026-03-07 01:14:33.957456 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-07 01:14:33.957460 | orchestrator | Saturday 07 March 2026 01:12:00 +0000 (0:00:02.509) 0:00:59.164 ******** 2026-03-07 01:14:33.957464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:14:33.957472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:14:33.957476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:14:33.957480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-03-07 01:14:33.957487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:33.957491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:33.957495 | orchestrator | 2026-03-07 01:14:33.957499 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-07 01:14:33.957521 | orchestrator | Saturday 07 March 2026 01:12:09 +0000 (0:00:08.433) 0:01:07.597 ******** 2026-03-07 01:14:33.957525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-07 01:14:33.957529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:14:33.957536 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.957540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-07 01:14:33.957545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:14:33.957548 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.957556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-07 01:14:33.957560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:14:33.957564 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.957568 | orchestrator | 2026-03-07 01:14:33.957572 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-07 01:14:33.957576 | orchestrator | Saturday 07 March 2026 01:12:10 +0000 (0:00:00.967) 0:01:08.564 ******** 2026-03-07 01:14:33.957580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:14:33.957587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:14:33.957592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-07 01:14:33.957598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:33.957602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:33.957613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:33.957617 | orchestrator | 2026-03-07 01:14:33.957620 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-07 01:14:33.957624 | orchestrator | Saturday 07 March 2026 01:12:13 +0000 (0:00:02.923) 0:01:11.488 ******** 2026-03-07 01:14:33.957628 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.957632 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.957636 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.957640 | orchestrator | 2026-03-07 01:14:33.957643 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-07 01:14:33.957647 | orchestrator | Saturday 07 March 2026 01:12:13 +0000 (0:00:00.362) 0:01:11.851 ******** 2026-03-07 01:14:33.957651 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:14:33.957655 | orchestrator | 2026-03-07 01:14:33.957659 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-07 01:14:33.957663 | orchestrator | Saturday 07 March 2026 01:12:15 +0000 (0:00:01.957) 0:01:13.809 ******** 2026-03-07 
01:14:33.957666 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:14:33.957670 | orchestrator | 2026-03-07 01:14:33.957674 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-07 01:14:33.957678 | orchestrator | Saturday 07 March 2026 01:12:17 +0000 (0:00:02.275) 0:01:16.084 ******** 2026-03-07 01:14:33.957682 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:14:33.957686 | orchestrator | 2026-03-07 01:14:33.957690 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-07 01:14:33.957694 | orchestrator | Saturday 07 March 2026 01:12:33 +0000 (0:00:16.166) 0:01:32.250 ******** 2026-03-07 01:14:33.957697 | orchestrator | 2026-03-07 01:14:33.957701 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-07 01:14:33.957705 | orchestrator | Saturday 07 March 2026 01:12:34 +0000 (0:00:00.204) 0:01:32.455 ******** 2026-03-07 01:14:33.957709 | orchestrator | 2026-03-07 01:14:33.957713 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-07 01:14:33.957717 | orchestrator | Saturday 07 March 2026 01:12:34 +0000 (0:00:00.170) 0:01:32.625 ******** 2026-03-07 01:14:33.957720 | orchestrator | 2026-03-07 01:14:33.957724 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-07 01:14:33.957728 | orchestrator | Saturday 07 March 2026 01:12:34 +0000 (0:00:00.169) 0:01:32.795 ******** 2026-03-07 01:14:33.957732 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:14:33.957736 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:14:33.957740 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:14:33.957743 | orchestrator | 2026-03-07 01:14:33.957747 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-07 01:14:33.957751 | orchestrator | Saturday 07 March 
2026 01:12:53 +0000 (0:00:18.801) 0:01:51.596 ******** 2026-03-07 01:14:33.957755 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:14:33.957759 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:14:33.957762 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:14:33.957766 | orchestrator | 2026-03-07 01:14:33.957772 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:14:33.957783 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-07 01:14:33.957788 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-07 01:14:33.957791 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-07 01:14:33.957795 | orchestrator | 2026-03-07 01:14:33.957799 | orchestrator | 2026-03-07 01:14:33.957803 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:14:33.957807 | orchestrator | Saturday 07 March 2026 01:13:04 +0000 (0:00:11.197) 0:02:02.794 ******** 2026-03-07 01:14:33.957810 | orchestrator | =============================================================================== 2026-03-07 01:14:33.957814 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.80s 2026-03-07 01:14:33.957818 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.17s 2026-03-07 01:14:33.957822 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.20s 2026-03-07 01:14:33.957826 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 8.43s 2026-03-07 01:14:33.957829 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.75s 2026-03-07 01:14:33.957833 | orchestrator | magnum : Creating Magnum trustee user 
----------------------------------- 4.64s 2026-03-07 01:14:33.957837 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.37s 2026-03-07 01:14:33.957841 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.27s 2026-03-07 01:14:33.957845 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 4.04s 2026-03-07 01:14:33.957848 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 4.03s 2026-03-07 01:14:33.957852 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.01s 2026-03-07 01:14:33.957856 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.71s 2026-03-07 01:14:33.957860 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.70s 2026-03-07 01:14:33.957864 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.99s 2026-03-07 01:14:33.957867 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.92s 2026-03-07 01:14:33.957871 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.62s 2026-03-07 01:14:33.957875 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.51s 2026-03-07 01:14:33.957879 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.44s 2026-03-07 01:14:33.957883 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.28s 2026-03-07 01:14:33.957887 | orchestrator | magnum : Creating Magnum database --------------------------------------- 1.96s 2026-03-07 01:14:33.957985 | orchestrator | 2026-03-07 01:14:33 | INFO  | Task 3c86d0d3-ae1c-449b-926d-602d07e70e7c is in state SUCCESS 2026-03-07 01:14:33.959152 | orchestrator | 2026-03-07 01:14:33.959197 | orchestrator 
| 2026-03-07 01:14:33.959203 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:14:33.959208 | orchestrator | 2026-03-07 01:14:33.959212 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:14:33.959217 | orchestrator | Saturday 07 March 2026 01:06:48 +0000 (0:00:00.280) 0:00:00.280 ******** 2026-03-07 01:14:33.959221 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:14:33.959225 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:14:33.959229 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:14:33.959233 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:14:33.959237 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:14:33.959241 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:14:33.959255 | orchestrator | 2026-03-07 01:14:33.959259 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:14:33.959263 | orchestrator | Saturday 07 March 2026 01:06:49 +0000 (0:00:00.724) 0:00:01.005 ******** 2026-03-07 01:14:33.959267 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-07 01:14:33.959272 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-07 01:14:33.959276 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-07 01:14:33.959280 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-07 01:14:33.959284 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-07 01:14:33.959288 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-07 01:14:33.959291 | orchestrator | 2026-03-07 01:14:33.959295 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-07 01:14:33.959299 | orchestrator | 2026-03-07 01:14:33.959303 | orchestrator | TASK [neutron : include_tasks] ************************************************* 
2026-03-07 01:14:33.959307 | orchestrator | Saturday 07 March 2026 01:06:50 +0000 (0:00:00.563) 0:00:01.568 ******** 2026-03-07 01:14:33.959312 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 01:14:33.959317 | orchestrator | 2026-03-07 01:14:33.959321 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-07 01:14:33.959324 | orchestrator | Saturday 07 March 2026 01:06:51 +0000 (0:00:01.337) 0:00:02.905 ******** 2026-03-07 01:14:33.959328 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:14:33.959332 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:14:33.959336 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:14:33.959340 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:14:33.959343 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:14:33.959347 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:14:33.959351 | orchestrator | 2026-03-07 01:14:33.959355 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-07 01:14:33.959358 | orchestrator | Saturday 07 March 2026 01:06:52 +0000 (0:00:01.275) 0:00:04.180 ******** 2026-03-07 01:14:33.959362 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:14:33.959366 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:14:33.959370 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:14:33.959373 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:14:33.959377 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:14:33.959397 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:14:33.959401 | orchestrator | 2026-03-07 01:14:33.959404 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-07 01:14:33.959408 | orchestrator | Saturday 07 March 2026 01:06:53 +0000 (0:00:01.335) 0:00:05.516 ******** 2026-03-07 01:14:33.959412 | orchestrator | ok: 
[testbed-node-0] => { 2026-03-07 01:14:33.959416 | orchestrator |  "changed": false, 2026-03-07 01:14:33.959431 | orchestrator |  "msg": "All assertions passed" 2026-03-07 01:14:33.959435 | orchestrator | } 2026-03-07 01:14:33.959439 | orchestrator | ok: [testbed-node-1] => { 2026-03-07 01:14:33.959443 | orchestrator |  "changed": false, 2026-03-07 01:14:33.959447 | orchestrator |  "msg": "All assertions passed" 2026-03-07 01:14:33.959450 | orchestrator | } 2026-03-07 01:14:33.959462 | orchestrator | ok: [testbed-node-2] => { 2026-03-07 01:14:33.959466 | orchestrator |  "changed": false, 2026-03-07 01:14:33.959504 | orchestrator |  "msg": "All assertions passed" 2026-03-07 01:14:33.959509 | orchestrator | } 2026-03-07 01:14:33.959512 | orchestrator | ok: [testbed-node-3] => { 2026-03-07 01:14:33.959516 | orchestrator |  "changed": false, 2026-03-07 01:14:33.959520 | orchestrator |  "msg": "All assertions passed" 2026-03-07 01:14:33.959524 | orchestrator | } 2026-03-07 01:14:33.959527 | orchestrator | ok: [testbed-node-4] => { 2026-03-07 01:14:33.959531 | orchestrator |  "changed": false, 2026-03-07 01:14:33.959535 | orchestrator |  "msg": "All assertions passed" 2026-03-07 01:14:33.959539 | orchestrator | } 2026-03-07 01:14:33.959547 | orchestrator | ok: [testbed-node-5] => { 2026-03-07 01:14:33.959559 | orchestrator |  "changed": false, 2026-03-07 01:14:33.959563 | orchestrator |  "msg": "All assertions passed" 2026-03-07 01:14:33.959573 | orchestrator | } 2026-03-07 01:14:33.959577 | orchestrator | 2026-03-07 01:14:33.959581 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-07 01:14:33.959584 | orchestrator | Saturday 07 March 2026 01:06:55 +0000 (0:00:01.899) 0:00:07.416 ******** 2026-03-07 01:14:33.959588 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.959592 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.959596 | orchestrator | skipping: [testbed-node-2] 2026-03-07 
01:14:33.959600 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.959603 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.959607 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.959611 | orchestrator | 2026-03-07 01:14:33.959615 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-07 01:14:33.959618 | orchestrator | Saturday 07 March 2026 01:06:56 +0000 (0:00:01.053) 0:00:08.470 ******** 2026-03-07 01:14:33.959622 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-07 01:14:33.959626 | orchestrator | 2026-03-07 01:14:33.959630 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-07 01:14:33.959634 | orchestrator | Saturday 07 March 2026 01:07:00 +0000 (0:00:04.037) 0:00:12.508 ******** 2026-03-07 01:14:33.959637 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-07 01:14:33.959642 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-07 01:14:33.959646 | orchestrator | 2026-03-07 01:14:33.959659 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-07 01:14:33.959663 | orchestrator | Saturday 07 March 2026 01:07:08 +0000 (0:00:07.467) 0:00:19.975 ******** 2026-03-07 01:14:33.959666 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-07 01:14:33.959670 | orchestrator | 2026-03-07 01:14:33.959674 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-07 01:14:33.959678 | orchestrator | Saturday 07 March 2026 01:07:11 +0000 (0:00:03.335) 0:00:23.311 ******** 2026-03-07 01:14:33.959682 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-07 01:14:33.959686 | orchestrator | changed: [testbed-node-0] => (item=neutron -> 
service) 2026-03-07 01:14:33.959689 | orchestrator | 2026-03-07 01:14:33.959693 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-07 01:14:33.959697 | orchestrator | Saturday 07 March 2026 01:07:16 +0000 (0:00:04.595) 0:00:27.907 ******** 2026-03-07 01:14:33.959701 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-07 01:14:33.959705 | orchestrator | 2026-03-07 01:14:33.959709 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-07 01:14:33.959713 | orchestrator | Saturday 07 March 2026 01:07:20 +0000 (0:00:03.991) 0:00:31.898 ******** 2026-03-07 01:14:33.959717 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-07 01:14:33.959721 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-07 01:14:33.959724 | orchestrator | 2026-03-07 01:14:33.959728 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-07 01:14:33.959732 | orchestrator | Saturday 07 March 2026 01:07:28 +0000 (0:00:08.082) 0:00:39.981 ******** 2026-03-07 01:14:33.959736 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.959740 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.959744 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.959747 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.959752 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.959756 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.959761 | orchestrator | 2026-03-07 01:14:33.959765 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-07 01:14:33.959773 | orchestrator | Saturday 07 March 2026 01:07:29 +0000 (0:00:00.905) 0:00:40.887 ******** 2026-03-07 01:14:33.959778 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.959782 | orchestrator | 
skipping: [testbed-node-1] 2026-03-07 01:14:33.959794 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.959799 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.959803 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.959807 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.959811 | orchestrator | 2026-03-07 01:14:33.959816 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-07 01:14:33.959820 | orchestrator | Saturday 07 March 2026 01:07:31 +0000 (0:00:02.418) 0:00:43.305 ******** 2026-03-07 01:14:33.959824 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:14:33.959829 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:14:33.959833 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:14:33.959838 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:14:33.959842 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:14:33.959846 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:14:33.959851 | orchestrator | 2026-03-07 01:14:33.959855 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-07 01:14:33.959859 | orchestrator | Saturday 07 March 2026 01:07:34 +0000 (0:00:02.227) 0:00:45.533 ******** 2026-03-07 01:14:33.959863 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.959867 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.959872 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.959876 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.959880 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.959884 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.959889 | orchestrator | 2026-03-07 01:14:33.959893 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-07 01:14:33.959897 | orchestrator | Saturday 07 March 2026 01:07:36 +0000 (0:00:02.302) 0:00:47.835 ******** 2026-03-07 
01:14:33.959904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:14:33.959914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:14:33.959920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:14:33.959967 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:14:33.959975 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:14:33.959980 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:14:33.959985 | orchestrator | 2026-03-07 01:14:33.959989 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-07 01:14:33.959994 | orchestrator | Saturday 07 March 2026 01:07:39 +0000 (0:00:03.470) 0:00:51.305 ******** 2026-03-07 01:14:33.959999 | orchestrator | [WARNING]: Skipped 2026-03-07 01:14:33.960004 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-07 01:14:33.960009 | orchestrator | due to this access issue: 2026-03-07 01:14:33.960013 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-07 01:14:33.960016 | orchestrator | a directory 2026-03-07 01:14:33.960020 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-07 01:14:33.960024 | orchestrator | 2026-03-07 01:14:33.960030 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-07 01:14:33.960034 | orchestrator | Saturday 07 March 2026 01:07:40 +0000 (0:00:01.063) 0:00:52.369 ******** 2026-03-07 01:14:33.960043 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 01:14:33.960049 | orchestrator | 2026-03-07 01:14:33.960052 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-07 01:14:33.960056 | orchestrator | Saturday 07 March 2026 01:07:42 +0000 (0:00:01.640) 0:00:54.009 ******** 2026-03-07 01:14:33.960060 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:14:33.960065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:14:33.960069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:14:33.960073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:14:33.960080 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:14:33.960088 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:14:33.960092 | orchestrator | 2026-03-07 01:14:33.960096 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-07 01:14:33.960100 | orchestrator | Saturday 07 March 2026 01:07:47 +0000 (0:00:04.560) 0:00:58.570 ******** 2026-03-07 01:14:33.960104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:14:33.960108 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.960112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:14:33.960116 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.960122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.960130 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.960134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:14:33.960138 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.960142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.960146 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.960150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.960154 | orchestrator | skipping: [testbed-node-5] 
2026-03-07 01:14:33.960158 | orchestrator | 2026-03-07 01:14:33.960162 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-07 01:14:33.960166 | orchestrator | Saturday 07 March 2026 01:07:52 +0000 (0:00:04.961) 0:01:03.532 ******** 2026-03-07 01:14:33.960169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:14:33.960177 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.960184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:14:33.960188 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.960192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:14:33.960196 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.960200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.960204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.960208 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.960215 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.960219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.960223 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.960227 | orchestrator | 2026-03-07 
01:14:33.960231 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-07 01:14:33.960236 | orchestrator | Saturday 07 March 2026 01:07:56 +0000 (0:00:04.293) 0:01:07.825 ******** 2026-03-07 01:14:33.960240 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.960244 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.960248 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.960252 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.960255 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.960259 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.960263 | orchestrator | 2026-03-07 01:14:33.960267 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-07 01:14:33.960270 | orchestrator | Saturday 07 March 2026 01:07:59 +0000 (0:00:03.547) 0:01:11.372 ******** 2026-03-07 01:14:33.960274 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.960278 | orchestrator | 2026-03-07 01:14:33.960282 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-07 01:14:33.960286 | orchestrator | Saturday 07 March 2026 01:08:00 +0000 (0:00:00.181) 0:01:11.553 ******** 2026-03-07 01:14:33.960289 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.960293 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.960297 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.960301 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.960304 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.960308 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.960312 | orchestrator | 2026-03-07 01:14:33.960316 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-07 01:14:33.960320 | orchestrator | Saturday 07 March 2026 01:08:00 +0000 (0:00:00.902) 
0:01:12.455 ******** 2026-03-07 01:14:33.960324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:14:33.960328 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.960332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 
01:14:33.960340 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.960344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:14:33.960348 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.960492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.960499 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.960503 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.960507 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.960511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.960521 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.960524 | orchestrator | 2026-03-07 01:14:33.960528 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-07 01:14:33.960532 | orchestrator | Saturday 07 March 2026 01:08:05 +0000 (0:00:04.846) 0:01:17.302 ******** 2026-03-07 01:14:33.960536 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:14:33.960540 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:14:33.960548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:14:33.960552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:14:33.960556 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:14:33.960566 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:14:33.960570 | orchestrator | 2026-03-07 01:14:33.960574 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-07 01:14:33.960578 | orchestrator | Saturday 07 March 2026 01:08:12 +0000 (0:00:06.998) 0:01:24.301 ******** 2026-03-07 01:14:33.960584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:14:33.960588 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:14:33.960593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:14:33.960600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:14:33.960604 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 
01:14:33.960610 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:14:33.960614 | orchestrator | 2026-03-07 01:14:33.960618 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-07 01:14:33.960622 | orchestrator | Saturday 07 March 2026 01:08:25 +0000 (0:00:12.257) 0:01:36.558 ******** 2026-03-07 01:14:33.960627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-03-07 01:14:33.960631 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.960639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:14:33.960643 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.960647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.960651 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.960656 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:14:33.960659 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.960666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.960670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.960681 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.960685 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.960688 | orchestrator | 2026-03-07 01:14:33.960692 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-07 01:14:33.960696 | orchestrator | Saturday 07 March 2026 01:08:31 +0000 (0:00:06.575) 0:01:43.133 ******** 2026-03-07 01:14:33.960700 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:14:33.960704 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.960708 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.960711 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:14:33.960715 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.960719 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:14:33.960723 | orchestrator | 2026-03-07 01:14:33.960727 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-07 01:14:33.960730 | orchestrator | Saturday 07 March 2026 01:08:37 +0000 (0:00:05.488) 0:01:48.622 ******** 2026-03-07 01:14:33.960734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.960738 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.960742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.960746 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.960755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.960759 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.960763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:14:33.960771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:14:33.960776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:14:33.960779 | orchestrator | 2026-03-07 01:14:33.960783 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-07 01:14:33.960787 | orchestrator | Saturday 07 March 2026 01:08:43 +0000 (0:00:06.735) 0:01:55.358 ******** 2026-03-07 01:14:33.960791 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.960795 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.960798 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.960804 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.960811 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.960817 | orchestrator | skipping: [testbed-node-4] 2026-03-07 
01:14:33.960823 | orchestrator | 2026-03-07 01:14:33.960829 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-07 01:14:33.960835 | orchestrator | Saturday 07 March 2026 01:08:47 +0000 (0:00:03.905) 0:01:59.264 ******** 2026-03-07 01:14:33.960841 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.960847 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.960854 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.960858 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.960861 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.960865 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.960869 | orchestrator | 2026-03-07 01:14:33.960873 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-07 01:14:33.960880 | orchestrator | Saturday 07 March 2026 01:08:52 +0000 (0:00:04.700) 0:02:03.964 ******** 2026-03-07 01:14:33.960887 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.960891 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.960897 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.960903 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.960910 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.960916 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.960922 | orchestrator | 2026-03-07 01:14:33.960964 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-07 01:14:33.960972 | orchestrator | Saturday 07 March 2026 01:08:56 +0000 (0:00:04.437) 0:02:08.402 ******** 2026-03-07 01:14:33.960979 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.960986 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.960992 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.961043 | orchestrator | skipping: [testbed-node-3] 2026-03-07 
01:14:33.961049 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.961056 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.961062 | orchestrator | 2026-03-07 01:14:33.961070 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-07 01:14:33.961079 | orchestrator | Saturday 07 March 2026 01:09:00 +0000 (0:00:04.112) 0:02:12.515 ******** 2026-03-07 01:14:33.961086 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.961093 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.961099 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.961105 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.961111 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.961119 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.961125 | orchestrator | 2026-03-07 01:14:33.961131 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-07 01:14:33.961138 | orchestrator | Saturday 07 March 2026 01:09:04 +0000 (0:00:03.375) 0:02:15.891 ******** 2026-03-07 01:14:33.961143 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.961147 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.961152 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.961156 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.961161 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.961165 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.961170 | orchestrator | 2026-03-07 01:14:33.961175 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-07 01:14:33.961179 | orchestrator | Saturday 07 March 2026 01:09:08 +0000 (0:00:03.775) 0:02:19.667 ******** 2026-03-07 01:14:33.961184 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-07 01:14:33.961188 
| orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.961193 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-07 01:14:33.961198 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.961202 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-07 01:14:33.961207 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.961211 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-07 01:14:33.961215 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.961220 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-07 01:14:33.961225 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.961229 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-07 01:14:33.961233 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.961238 | orchestrator | 2026-03-07 01:14:33.961242 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-07 01:14:33.961256 | orchestrator | Saturday 07 March 2026 01:09:10 +0000 (0:00:02.715) 0:02:22.382 ******** 2026-03-07 01:14:33.961262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:14:33.961267 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.961277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:14:33.961282 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.961287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:14:33.961292 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.961297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.961301 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.961306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.961314 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.961319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.961323 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.961328 | orchestrator | 2026-03-07 01:14:33.961332 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-07 01:14:33.961337 | orchestrator | Saturday 07 March 2026 01:09:14 +0000 (0:00:03.791) 0:02:26.174 ******** 2026-03-07 01:14:33.961524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.961536 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.961541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:14:33.961546 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.961551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 
01:14:33.961561 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.961566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:14:33.961570 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.961577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})
2026-03-07 01:14:33.961582 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:33.961593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-07 01:14:33.961597 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:14:33.961601 | orchestrator |
2026-03-07 01:14:33.961605 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-03-07 01:14:33.961609 | orchestrator | Saturday 07 March 2026 01:09:19 +0000 (0:00:04.860) 0:02:31.034 ********
2026-03-07 01:14:33.961612 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:33.961616 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:33.961620 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:33.961624 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:14:33.961627 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:14:33.961631 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:14:33.961635 | orchestrator |
2026-03-07 01:14:33.961639 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-03-07 01:14:33.961646 | orchestrator | Saturday 07 March 2026 01:09:22 +0000 (0:00:02.799) 0:02:33.834 ********
2026-03-07 01:14:33.961649 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:33.961653 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:33.961657 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:33.961660 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:14:33.961664 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:14:33.961668 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:14:33.961671 | orchestrator |
2026-03-07 01:14:33.961675 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-03-07 01:14:33.961679 | orchestrator | Saturday 07 March 2026 01:09:26 +0000 (0:00:04.440) 0:02:38.274 ********
2026-03-07 01:14:33.961683 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:33.961686 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:33.961690 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:33.961694 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:14:33.961698 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:14:33.961701 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:14:33.961705 | orchestrator |
2026-03-07 01:14:33.961709 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-03-07 01:14:33.961712 | orchestrator | Saturday 07 March 2026 01:09:29 +0000 (0:00:02.721) 0:02:40.996 ********
2026-03-07 01:14:33.961716 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:33.961720 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:14:33.961723 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:33.961728 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:33.961731 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:14:33.961735 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:14:33.961738 | orchestrator |
2026-03-07 01:14:33.961742 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-03-07 01:14:33.961746 | orchestrator | Saturday 07 March 2026 01:09:32 +0000 (0:00:03.348) 0:02:44.345 ********
2026-03-07 01:14:33.961750 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:33.961753 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:33.961757 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:33.961761 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:14:33.961764 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:14:33.961768 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:14:33.961772 | orchestrator |
2026-03-07 01:14:33.961776 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-03-07 01:14:33.961779 | orchestrator | Saturday 07 March 2026 01:09:37 +0000 (0:00:04.276) 0:02:48.622 ********
2026-03-07 01:14:33.961783 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:33.961787 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:33.961791 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:33.961795 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:14:33.961798 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:14:33.961802 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:14:33.961806 | orchestrator |
2026-03-07 01:14:33.961810 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-03-07 01:14:33.961813 | orchestrator | Saturday 07 March 2026 01:09:42 +0000 (0:00:05.455) 0:02:54.078 ********
2026-03-07 01:14:33.961817 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:33.961821 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:33.961825 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:33.961828 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:14:33.961832 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:14:33.961836 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:14:33.961839 | orchestrator |
2026-03-07 01:14:33.961843 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-03-07 01:14:33.961847 | orchestrator | Saturday 07 March 2026 01:09:46 +0000 (0:00:03.926) 0:02:58.004 ********
2026-03-07 01:14:33.961851 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:33.961858 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:33.961861 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:33.961865 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:14:33.961869 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:14:33.961873 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:14:33.961877 | orchestrator |
2026-03-07 01:14:33.961881 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-03-07 01:14:33.961887 | orchestrator | Saturday 07 March 2026 01:09:51 +0000 (0:00:05.462) 0:03:03.466 ********
2026-03-07 01:14:33.961891 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:33.961895 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:33.961899 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:33.961902 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:14:33.961906 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:14:33.961910 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:14:33.961914 | orchestrator |
2026-03-07 01:14:33.961917 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-03-07 01:14:33.961921 | orchestrator | Saturday 07 March 2026 01:09:56 +0000 (0:00:04.629) 0:03:08.096 ********
2026-03-07 01:14:33.961938 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-07 01:14:33.961943 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:33.961947 | orchestrator | skipping: [testbed-node-1] =>
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-07 01:14:33.961951 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.961954 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-07 01:14:33.961958 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.961962 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-07 01:14:33.961966 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.961970 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-07 01:14:33.961974 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-07 01:14:33.961978 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.961981 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.961985 | orchestrator | 2026-03-07 01:14:33.961989 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-07 01:14:33.961993 | orchestrator | Saturday 07 March 2026 01:10:02 +0000 (0:00:05.557) 0:03:13.654 ******** 2026-03-07 01:14:33.961997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:14:33.962001 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.962005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.962067 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.962083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:14:33.962091 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.962097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-07 01:14:33.962104 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.962111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 
01:14:33.962118 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.962124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-07 01:14:33.962138 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.962142 | orchestrator | 2026-03-07 01:14:33.962146 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-07 01:14:33.962150 | orchestrator | Saturday 07 March 2026 01:10:06 +0000 (0:00:04.524) 0:03:18.179 ******** 2026-03-07 01:14:33.962154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:14:33.962162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:14:33.962167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-07 01:14:33.962171 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:14:33.962176 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:14:33.962185 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-07 01:14:33.962189 | orchestrator | 2026-03-07 01:14:33.962193 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-07 01:14:33.962196 | orchestrator | Saturday 07 March 2026 01:10:12 +0000 (0:00:06.217) 0:03:24.397 ******** 2026-03-07 01:14:33.962200 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:33.962204 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:33.962208 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:33.962212 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:33.962215 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:33.962221 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:33.962225 | orchestrator | 2026-03-07 01:14:33.962229 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-07 01:14:33.962233 | orchestrator | Saturday 07 March 2026 01:10:14 +0000 (0:00:01.222) 0:03:25.620 ******** 2026-03-07 01:14:33.962237 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:14:33.962241 | orchestrator | 2026-03-07 01:14:33.962244 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-07 01:14:33.962249 | orchestrator | Saturday 07 March 2026 01:10:16 +0000 (0:00:02.540) 0:03:28.161 ******** 2026-03-07 01:14:33.962253 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:14:33.962257 | orchestrator | 2026-03-07 01:14:33.962261 | orchestrator | TASK [neutron : Running 
Neutron bootstrap container] *************************** 2026-03-07 01:14:33.962265 | orchestrator | Saturday 07 March 2026 01:10:19 +0000 (0:00:02.766) 0:03:30.927 ******** 2026-03-07 01:14:33.962269 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:14:33.962273 | orchestrator | 2026-03-07 01:14:33.962277 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-07 01:14:33.962281 | orchestrator | Saturday 07 March 2026 01:11:05 +0000 (0:00:45.822) 0:04:16.749 ******** 2026-03-07 01:14:33.962286 | orchestrator | 2026-03-07 01:14:33.962292 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-07 01:14:33.962298 | orchestrator | Saturday 07 March 2026 01:11:05 +0000 (0:00:00.077) 0:04:16.827 ******** 2026-03-07 01:14:33.962304 | orchestrator | 2026-03-07 01:14:33.962310 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-07 01:14:33.962316 | orchestrator | Saturday 07 March 2026 01:11:05 +0000 (0:00:00.374) 0:04:17.202 ******** 2026-03-07 01:14:33.962323 | orchestrator | 2026-03-07 01:14:33.962329 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-07 01:14:33.962336 | orchestrator | Saturday 07 March 2026 01:11:05 +0000 (0:00:00.072) 0:04:17.274 ******** 2026-03-07 01:14:33.962345 | orchestrator | 2026-03-07 01:14:33.962349 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-07 01:14:33.962352 | orchestrator | Saturday 07 March 2026 01:11:05 +0000 (0:00:00.077) 0:04:17.352 ******** 2026-03-07 01:14:33.962356 | orchestrator | 2026-03-07 01:14:33.962360 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-07 01:14:33.962363 | orchestrator | Saturday 07 March 2026 01:11:05 +0000 (0:00:00.069) 0:04:17.421 ******** 2026-03-07 01:14:33.962367 | 
orchestrator |
2026-03-07 01:14:33.962371 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-03-07 01:14:33.962375 | orchestrator | Saturday 07 March 2026 01:11:05 +0000 (0:00:00.075) 0:04:17.496 ********
2026-03-07 01:14:33.962379 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:33.962382 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:14:33.962386 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:14:33.962390 | orchestrator |
2026-03-07 01:14:33.962394 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-03-07 01:14:33.962398 | orchestrator | Saturday 07 March 2026 01:11:34 +0000 (0:00:28.537) 0:04:46.034 ********
2026-03-07 01:14:33.962402 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:14:33.962405 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:14:33.962409 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:14:33.962413 | orchestrator |
2026-03-07 01:14:33.962417 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:14:33.962421 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-07 01:14:33.962425 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-07 01:14:33.962429 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-07 01:14:33.962433 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-07 01:14:33.962437 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-07 01:14:33.962440 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-07 01:14:33.962444 | orchestrator |
2026-03-07 01:14:33.962448 | orchestrator |
2026-03-07 01:14:33.962451 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 01:14:33.962456 | orchestrator | Saturday 07 March 2026 01:12:31 +0000 (0:00:56.761) 0:05:42.796 ********
2026-03-07 01:14:33.962459 | orchestrator | ===============================================================================
2026-03-07 01:14:33.962463 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 56.76s
2026-03-07 01:14:33.962467 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 45.82s
2026-03-07 01:14:33.962471 | orchestrator | neutron : Restart neutron-server container ----------------------------- 28.54s
2026-03-07 01:14:33.962475 | orchestrator | neutron : Copying over neutron.conf ------------------------------------ 12.26s
2026-03-07 01:14:33.962478 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.08s
2026-03-07 01:14:33.962482 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.47s
2026-03-07 01:14:33.962486 | orchestrator | neutron : Copying over config.json files for services ------------------- 7.00s
2026-03-07 01:14:33.962489 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 6.74s
2026-03-07 01:14:33.962496 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 6.58s
2026-03-07 01:14:33.962500 | orchestrator | neutron : Check neutron containers -------------------------------------- 6.22s
2026-03-07 01:14:33.962508 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 5.56s
2026-03-07 01:14:33.962512 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 5.49s
2026-03-07 01:14:33.962516 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 5.46s
2026-03-07 01:14:33.962520 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 5.46s
2026-03-07 01:14:33.962524 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 4.96s
2026-03-07 01:14:33.962528 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 4.86s
2026-03-07 01:14:33.962532 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.85s
2026-03-07 01:14:33.962536 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 4.70s
2026-03-07 01:14:33.962540 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 4.63s
2026-03-07 01:14:33.962544 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.60s
2026-03-07 01:14:33.962548 | orchestrator | 2026-03-07 01:14:33 | INFO  | Task 2e1dd76f-d9d6-44a4-abb8-becf57ce4644 is in state STARTED
2026-03-07 01:14:33.962552 | orchestrator | 2026-03-07 01:14:33 | INFO  | Task 007c0074-fb98-4428-b2b3-f1170913b207 is in state SUCCESS
2026-03-07 01:14:33.962555 | orchestrator | 2026-03-07 01:14:33 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:14:36.994391 | orchestrator | 2026-03-07 01:14:36 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:14:36.995639 | orchestrator | 2026-03-07 01:14:36 | INFO  | Task 2e1dd76f-d9d6-44a4-abb8-becf57ce4644 is in state STARTED
2026-03-07 01:14:36.997574 | orchestrator | 2026-03-07 01:14:36 | INFO  | Task 19c99daf-1670-4b09-947c-220806e8b65d is in state STARTED
2026-03-07 01:14:36.997621 | orchestrator | 2026-03-07 01:14:36 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:14:40.045869 | orchestrator | 2026-03-07 01:14:40 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:14:40.046570 | orchestrator |
2026-03-07 01:14:40 | INFO  | Task 2e1dd76f-d9d6-44a4-abb8-becf57ce4644 is in state STARTED
2026-03-07 01:14:40.048758 | orchestrator | 2026-03-07 01:14:40 | INFO  | Task 19c99daf-1670-4b09-947c-220806e8b65d is in state STARTED
2026-03-07 01:14:40.048780 | orchestrator | 2026-03-07 01:14:40 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:14:43.083973 | orchestrator | 2026-03-07 01:14:43 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:14:43.084057 | orchestrator | 2026-03-07 01:14:43 | INFO  | Task 2e1dd76f-d9d6-44a4-abb8-becf57ce4644 is in state STARTED
2026-03-07 01:14:43.085466 | orchestrator | 2026-03-07 01:14:43 | INFO  | Task 19c99daf-1670-4b09-947c-220806e8b65d is in state STARTED
2026-03-07 01:14:43.085531 | orchestrator | 2026-03-07 01:14:43 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:14:46.125557 | orchestrator | 2026-03-07 01:14:46 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:14:46.127852 | orchestrator | 2026-03-07 01:14:46 | INFO  | Task 2e1dd76f-d9d6-44a4-abb8-becf57ce4644 is in state STARTED
2026-03-07 01:14:46.129441 | orchestrator | 2026-03-07 01:14:46 | INFO  | Task 19c99daf-1670-4b09-947c-220806e8b65d is in state STARTED
2026-03-07 01:14:46.129495 | orchestrator | 2026-03-07 01:14:46 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:14:49.166509 | orchestrator | 2026-03-07 01:14:49 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:14:49.167014 | orchestrator | 2026-03-07 01:14:49 | INFO  | Task 2e1dd76f-d9d6-44a4-abb8-becf57ce4644 is in state STARTED
2026-03-07 01:14:49.168251 | orchestrator | 2026-03-07 01:14:49 | INFO  | Task 19c99daf-1670-4b09-947c-220806e8b65d is in state STARTED
2026-03-07 01:14:49.168289 | orchestrator | 2026-03-07 01:14:49 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:14:52.201529 | orchestrator | 2026-03-07 01:14:52 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:14:52.202080 | orchestrator | 2026-03-07 01:14:52 | INFO  | Task 2e1dd76f-d9d6-44a4-abb8-becf57ce4644 is in state STARTED
2026-03-07 01:14:52.203201 | orchestrator | 2026-03-07 01:14:52 | INFO  | Task 19c99daf-1670-4b09-947c-220806e8b65d is in state STARTED
2026-03-07 01:14:52.203264 | orchestrator | 2026-03-07 01:14:52 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:14:55.271609 | orchestrator | 2026-03-07 01:14:55 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state STARTED
2026-03-07 01:14:55.272362 | orchestrator | 2026-03-07 01:14:55 | INFO  | Task 2e1dd76f-d9d6-44a4-abb8-becf57ce4644 is in state STARTED
2026-03-07 01:14:55.273497 | orchestrator | 2026-03-07 01:14:55 | INFO  | Task 19c99daf-1670-4b09-947c-220806e8b65d is in state STARTED
2026-03-07 01:14:55.273933 | orchestrator | 2026-03-07 01:14:55 | INFO  | Wait 1 second(s) until the next check
2026-03-07 01:14:58.316297 | orchestrator | 2026-03-07 01:14:58 | INFO  | Task c682d96e-d59b-4319-b175-c83befc59863 is in state SUCCESS
2026-03-07 01:14:58.317614 | orchestrator |
2026-03-07 01:14:58.317665 | orchestrator |
2026-03-07 01:14:58.317675 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 01:14:58.317682 | orchestrator |
2026-03-07 01:14:58.317688 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-07 01:14:58.317695 | orchestrator | Saturday 07 March 2026 01:12:42 +0000 (0:00:00.250) 0:00:00.250 ********
2026-03-07 01:14:58.317701 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:14:58.317708 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:14:58.317715 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:14:58.317721 | orchestrator |
2026-03-07 01:14:58.317728 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-07 01:14:58.317735 | orchestrator | Saturday 07 March 2026 01:12:43 +0000 (0:00:00.456) 0:00:00.706 ********
2026-03-07 01:14:58.317741 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-03-07 01:14:58.317749 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-07 01:14:58.317755 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-07 01:14:58.317761 | orchestrator |
2026-03-07 01:14:58.317768 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-07 01:14:58.317774 | orchestrator |
2026-03-07 01:14:58.317780 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-07 01:14:58.317787 | orchestrator | Saturday 07 March 2026 01:12:44 +0000 (0:00:01.097) 0:00:01.804 ********
2026-03-07 01:14:58.317794 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:14:58.317801 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:14:58.317807 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:14:58.317813 | orchestrator |
2026-03-07 01:14:58.317820 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:14:58.317827 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:14:58.317836 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:14:58.317843 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:14:58.317850 | orchestrator |
2026-03-07 01:14:58.317856 | orchestrator |
2026-03-07 01:14:58.317862 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 01:14:58.317912 | orchestrator | Saturday 07 March 2026 01:12:45 +0000 (0:00:00.920) 0:00:02.725 ********
2026-03-07 01:14:58.317920 | orchestrator |
===============================================================================
2026-03-07 01:14:58.317927 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.10s
2026-03-07 01:14:58.317934 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.92s
2026-03-07 01:14:58.317940 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.46s
2026-03-07 01:14:58.317946 | orchestrator |
2026-03-07 01:14:58.317952 | orchestrator |
2026-03-07 01:14:58.317959 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 01:14:58.317965 | orchestrator |
2026-03-07 01:14:58.317973 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-03-07 01:14:58.317979 | orchestrator | Saturday 07 March 2026 01:04:10 +0000 (0:00:00.336) 0:00:00.336 ********
2026-03-07 01:14:58.317985 | orchestrator | changed: [testbed-manager]
2026-03-07 01:14:58.317992 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:58.317998 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:14:58.318004 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:14:58.318043 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:14:58.318052 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:14:58.318058 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:14:58.318064 | orchestrator |
2026-03-07 01:14:58.318071 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-07 01:14:58.318078 | orchestrator | Saturday 07 March 2026 01:04:11 +0000 (0:00:01.044) 0:00:01.380 ********
2026-03-07 01:14:58.318084 | orchestrator | changed: [testbed-manager]
2026-03-07 01:14:58.318090 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:58.318097 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:14:58.318103 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:14:58.318109 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:14:58.318116 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:14:58.318122 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:14:58.318128 | orchestrator |
2026-03-07 01:14:58.318134 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-07 01:14:58.318141 | orchestrator | Saturday 07 March 2026 01:04:12 +0000 (0:00:00.901) 0:00:02.282 ********
2026-03-07 01:14:58.318148 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-07 01:14:58.318155 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-07 01:14:58.318161 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-07 01:14:58.318168 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-07 01:14:58.318174 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-07 01:14:58.318181 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-07 01:14:58.318187 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-07 01:14:58.318193 | orchestrator |
2026-03-07 01:14:58.318199 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-07 01:14:58.318205 | orchestrator |
2026-03-07 01:14:58.318212 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-07 01:14:58.318218 | orchestrator | Saturday 07 March 2026 01:04:14 +0000 (0:00:01.496) 0:00:03.778 ********
2026-03-07 01:14:58.318225 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:14:58.318231 | orchestrator |
2026-03-07 01:14:58.318237 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-07 01:14:58.318243 | orchestrator | Saturday 07 March 2026 01:04:15 +0000 (0:00:01.444) 0:00:05.223 ********
2026-03-07 01:14:58.318249 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-07 01:14:58.318268 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-07 01:14:58.318275 | orchestrator |
2026-03-07 01:14:58.318281 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-07 01:14:58.318296 | orchestrator | Saturday 07 March 2026 01:04:20 +0000 (0:00:04.930) 0:00:10.154 ********
2026-03-07 01:14:58.318303 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-07 01:14:58.318309 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-07 01:14:58.318315 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:58.318320 | orchestrator |
2026-03-07 01:14:58.318327 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-07 01:14:58.318334 | orchestrator | Saturday 07 March 2026 01:04:25 +0000 (0:00:04.937) 0:00:15.092 ********
2026-03-07 01:14:58.318342 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:58.318351 | orchestrator |
2026-03-07 01:14:58.318364 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-07 01:14:58.318371 | orchestrator | Saturday 07 March 2026 01:04:26 +0000 (0:00:00.809) 0:00:15.901 ********
2026-03-07 01:14:58.318377 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:58.318383 | orchestrator |
2026-03-07 01:14:58.318390 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-07 01:14:58.318397 | orchestrator | Saturday 07 March 2026 01:04:28 +0000 (0:00:02.008) 0:00:17.910 ********
2026-03-07 01:14:58.318404 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:58.318411 | orchestrator |
2026-03-07 01:14:58.318417 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-07 01:14:58.318423 | orchestrator | Saturday 07 March 2026 01:04:31 +0000 (0:00:03.601) 0:00:21.512 ********
2026-03-07 01:14:58.318429 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.318436 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.318442 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.318448 | orchestrator |
2026-03-07 01:14:58.318454 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-07 01:14:58.318460 | orchestrator | Saturday 07 March 2026 01:04:32 +0000 (0:00:00.502) 0:00:22.014 ********
2026-03-07 01:14:58.318466 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:14:58.318473 | orchestrator |
2026-03-07 01:14:58.318480 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-07 01:14:58.318486 | orchestrator | Saturday 07 March 2026 01:05:08 +0000 (0:00:36.454) 0:00:58.469 ********
2026-03-07 01:14:58.318493 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:58.318499 | orchestrator |
2026-03-07 01:14:58.318505 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-07 01:14:58.318511 | orchestrator | Saturday 07 March 2026 01:05:24 +0000 (0:00:15.787) 0:01:14.257 ********
2026-03-07 01:14:58.318517 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:14:58.318523 | orchestrator |
2026-03-07 01:14:58.318528 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-07 01:14:58.318534 | orchestrator | Saturday 07 March 2026 01:05:36 +0000 (0:00:12.159) 0:01:26.417 ********
2026-03-07 01:14:58.318539 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:14:58.318545 | orchestrator |
2026-03-07 01:14:58.318551 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-07 01:14:58.318558 | orchestrator | Saturday 07 March 2026 01:05:39 +0000 (0:00:02.485) 0:01:28.903 ********
2026-03-07 01:14:58.318564 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.318571 | orchestrator |
2026-03-07 01:14:58.318578 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-07 01:14:58.318586 | orchestrator | Saturday 07 March 2026 01:05:40 +0000 (0:00:00.949) 0:01:29.852 ********
2026-03-07 01:14:58.318594 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:14:58.318602 | orchestrator |
2026-03-07 01:14:58.318608 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-07 01:14:58.318616 | orchestrator | Saturday 07 March 2026 01:05:41 +0000 (0:00:01.210) 0:01:31.063 ********
2026-03-07 01:14:58.318623 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:14:58.318630 | orchestrator |
2026-03-07 01:14:58.318647 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-07 01:14:58.318655 | orchestrator | Saturday 07 March 2026 01:06:04 +0000 (0:00:22.984) 0:01:54.048 ********
2026-03-07 01:14:58.318665 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.318672 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.318680 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.318687 | orchestrator |
2026-03-07 01:14:58.318696 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-07 01:14:58.318704 | orchestrator |
2026-03-07 01:14:58.318713 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-07 01:14:58.318721 | orchestrator | Saturday 07 March 2026 01:06:04 +0000 (0:00:00.490) 0:01:54.538 ********
2026-03-07 01:14:58.318729 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:14:58.318737 | orchestrator |
2026-03-07 01:14:58.318746 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-07 01:14:58.318754 | orchestrator | Saturday 07 March 2026 01:06:05 +0000 (0:00:00.994) 0:01:55.533 ********
2026-03-07 01:14:58.318762 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.318771 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.318779 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:58.318788 | orchestrator |
2026-03-07 01:14:58.318796 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-07 01:14:58.318804 | orchestrator | Saturday 07 March 2026 01:06:08 +0000 (0:00:02.364) 0:01:57.897 ********
2026-03-07 01:14:58.318811 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.318820 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.318827 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:58.318835 | orchestrator |
2026-03-07 01:14:58.318844 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-07 01:14:58.318852 | orchestrator | Saturday 07 March 2026 01:06:10 +0000 (0:00:02.481) 0:02:00.378 ********
2026-03-07 01:14:58.318859 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.318867 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.318961 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.318968 | orchestrator |
2026-03-07 01:14:58.318974 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-07 01:14:58.318980 | orchestrator | Saturday 07 March 2026 01:06:11 +0000 (0:00:01.163) 0:02:01.542 ********
2026-03-07 01:14:58.318986 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-07 01:14:58.318994 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.319003 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-07 01:14:58.319008 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.319015 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-07 01:14:58.319022 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-07 01:14:58.319028 | orchestrator |
2026-03-07 01:14:58.319034 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-07 01:14:58.319041 | orchestrator | Saturday 07 March 2026 01:06:20 +0000 (0:00:09.098) 0:02:10.640 ********
2026-03-07 01:14:58.319049 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.319055 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.319062 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.319069 | orchestrator |
2026-03-07 01:14:58.319078 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-07 01:14:58.319084 | orchestrator | Saturday 07 March 2026 01:06:21 +0000 (0:00:00.514) 0:02:11.155 ********
2026-03-07 01:14:58.319091 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-07 01:14:58.319098 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.319104 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-07 01:14:58.319110 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.319115 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-07 01:14:58.319132 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.319138 | orchestrator |
2026-03-07 01:14:58.319144 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-07 01:14:58.319150 | orchestrator | Saturday 07 March 2026 01:06:22 +0000 (0:00:00.699) 0:02:11.855 ********
2026-03-07 01:14:58.319156 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.319162 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:58.319168 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.319175 | orchestrator |
2026-03-07 01:14:58.319181 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-07 01:14:58.319188 | orchestrator | Saturday 07 March 2026 01:06:22 +0000 (0:00:00.734) 0:02:12.589 ********
2026-03-07 01:14:58.319194 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.319200 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.319207 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:58.319213 | orchestrator |
2026-03-07 01:14:58.319219 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-07 01:14:58.319225 | orchestrator | Saturday 07 March 2026 01:06:24 +0000 (0:00:01.365) 0:02:13.955 ********
2026-03-07 01:14:58.319232 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.319239 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.319246 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:58.319252 | orchestrator |
2026-03-07 01:14:58.319259 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-07 01:14:58.319265 | orchestrator | Saturday 07 March 2026 01:06:27 +0000 (0:00:03.535) 0:02:17.490 ********
2026-03-07 01:14:58.319271 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.319278 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.319283 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:14:58.319289 | orchestrator |
2026-03-07 01:14:58.319296 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-07 01:14:58.319302 | orchestrator | Saturday 07 March 2026 01:06:52 +0000 (0:00:25.148) 0:02:42.638 ********
2026-03-07 01:14:58.319308 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.319315 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.319321 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:14:58.319329 | orchestrator |
2026-03-07 01:14:58.319338 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-07 01:14:58.319345 | orchestrator | Saturday 07 March 2026 01:07:10 +0000 (0:00:17.030) 0:02:59.668 ********
2026-03-07 01:14:58.319351 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:14:58.319358 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.319364 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.319370 | orchestrator |
2026-03-07 01:14:58.319375 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-07 01:14:58.319381 | orchestrator | Saturday 07 March 2026 01:07:11 +0000 (0:00:01.060) 0:03:00.729 ********
2026-03-07 01:14:58.319386 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.319392 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.319397 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:58.319403 | orchestrator |
2026-03-07 01:14:58.319409 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-07 01:14:58.319415 | orchestrator | Saturday 07 March 2026 01:07:25 +0000 (0:00:13.978) 0:03:14.707 ********
2026-03-07 01:14:58.319420 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.319426 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.319432 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.319439 | orchestrator |
2026-03-07 01:14:58.319442 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-07 01:14:58.319446 | orchestrator | Saturday 07 March 2026 01:07:26 +0000 (0:00:01.167) 0:03:15.875 ********
2026-03-07 01:14:58.319450 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.319454 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.319458 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.319468 | orchestrator |
2026-03-07 01:14:58.319472 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-07 01:14:58.319475 | orchestrator |
2026-03-07 01:14:58.319479 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-07 01:14:58.319483 | orchestrator | Saturday 07 March 2026 01:07:26 +0000 (0:00:00.586) 0:03:16.462 ********
2026-03-07 01:14:58.319487 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:14:58.319498 | orchestrator |
2026-03-07 01:14:58.319516 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-03-07 01:14:58.319520 | orchestrator | Saturday 07 March 2026 01:07:27 +0000 (0:00:00.623) 0:03:17.085 ********
2026-03-07 01:14:58.319524 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-03-07 01:14:58.319529 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-07 01:14:58.319535 | orchestrator |
2026-03-07 01:14:58.319541 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-03-07 01:14:58.319547 | orchestrator | Saturday 07 March 2026 01:07:31 +0000 (0:00:03.637) 0:03:20.723 ********
2026-03-07 01:14:58.319553 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-03-07 01:14:58.319560 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-03-07 01:14:58.319567 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-07 01:14:58.319573 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-07 01:14:58.319579 | orchestrator |
2026-03-07 01:14:58.319585 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-07 01:14:58.319592 | orchestrator | Saturday 07 March 2026 01:07:38 +0000 (0:00:07.589) 0:03:28.313 ********
2026-03-07 01:14:58.319598 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-07 01:14:58.319604 | orchestrator |
2026-03-07 01:14:58.319610 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-07 01:14:58.319616 | orchestrator | Saturday 07 March 2026 01:07:42 +0000 (0:00:03.601) 0:03:31.915 ********
2026-03-07 01:14:58.319622 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-07 01:14:58.319629 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-07 01:14:58.319633 | orchestrator |
2026-03-07 01:14:58.319637 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-03-07 01:14:58.319640 | orchestrator | Saturday 07 March 2026 01:07:47 +0000 (0:00:04.814) 0:03:36.729 ********
2026-03-07 01:14:58.319644 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-07 01:14:58.319648 | orchestrator |
2026-03-07 01:14:58.319651 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-03-07 01:14:58.319655 | orchestrator | Saturday 07 March 2026 01:07:50 +0000 (0:00:03.863) 0:03:40.593 ********
2026-03-07 01:14:58.319659 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-03-07 01:14:58.319663 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-03-07 01:14:58.319666 | orchestrator |
2026-03-07 01:14:58.319670 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-07 01:14:58.319674 | orchestrator | Saturday 07 March 2026 01:07:59 +0000 (0:00:08.494) 0:03:49.088 ********
2026-03-07 01:14:58.319682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-07 01:14:58.319703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-07 01:14:58.319709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-07 01:14:58.319714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-07 01:14:58.319722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-07 01:14:58.319736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-07 01:14:58.319743 | orchestrator |
2026-03-07 01:14:58.319749 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-03-07 01:14:58.319756 | orchestrator | Saturday 07 March 2026 01:08:00 +0000 (0:00:01.524) 0:03:50.612 ********
2026-03-07 01:14:58.319762 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.319769 | orchestrator |
2026-03-07 01:14:58.319775 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-03-07 01:14:58.319781 | orchestrator | Saturday 07 March 2026 01:08:01 +0000 (0:00:00.232) 0:03:50.845 ********
2026-03-07 01:14:58.319787 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.319793 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.319799 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.319806 | orchestrator |
2026-03-07 01:14:58.319816 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-03-07 01:14:58.319827 | orchestrator | Saturday 07 March 2026 01:08:02 +0000 (0:00:00.922) 0:03:51.767 ********
2026-03-07 01:14:58.319834 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-07 01:14:58.319840 | orchestrator |
2026-03-07 01:14:58.319846 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-03-07 01:14:58.319852 | orchestrator | Saturday 07 March 2026 01:08:04 +0000 (0:00:02.309) 0:03:54.080 ********
2026-03-07 01:14:58.319859 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.319865 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.319886 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.319893 | orchestrator |
2026-03-07 01:14:58.319899 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-07 01:14:58.319905 | orchestrator | Saturday 07 March 2026 01:08:05 +0000 (0:00:00.769) 0:03:54.850 ********
2026-03-07 01:14:58.319912 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:14:58.319916 | orchestrator |
2026-03-07 01:14:58.319920 | orchestrator | TASK [service-cert-copy :
nova | Copying over extra CA certificates] *********** 2026-03-07 01:14:58.319924 | orchestrator | Saturday 07 March 2026 01:08:06 +0000 (0:00:01.031) 0:03:55.882 ******** 2026-03-07 01:14:58.319928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:14:58.319937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.319941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:14:58.319953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 
'], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:14:58.319958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.319967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.319971 | orchestrator | 2026-03-07 01:14:58.319975 | orchestrator | TASK 
[service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-07 01:14:58.319979 | orchestrator | Saturday 07 March 2026 01:08:11 +0000 (0:00:05.457) 0:04:01.340 ******** 2026-03-07 01:14:58.319983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 01:14:58.319994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.320000 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.320005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 01:14:58.320016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.320020 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.320025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 01:14:58.320030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.320034 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.320039 | orchestrator | 2026-03-07 01:14:58.320046 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-07 01:14:58.320053 | orchestrator | Saturday 07 March 2026 01:08:13 +0000 (0:00:02.119) 0:04:03.459 ******** 2026-03-07 01:14:58.320059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 01:14:58.320071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 01:14:58.320078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.320084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.320091 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.320098 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.320114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 01:14:58.320125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.320129 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.320134 | orchestrator | 2026-03-07 01:14:58.320139 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-07 01:14:58.320143 | orchestrator | Saturday 07 March 2026 01:08:17 +0000 (0:00:03.687) 0:04:07.147 ******** 2026-03-07 01:14:58.320148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:14:58.320159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:14:58.320164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-03-07 01:14:58.320172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.320177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:14:58.320182 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.320187 | orchestrator | 2026-03-07 01:14:58.320191 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-07 01:14:58.320195 | orchestrator | Saturday 07 March 2026 01:08:23 +0000 (0:00:05.970) 0:04:13.117 ******** 2026-03-07 01:14:58.320207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:14:58.320216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:14:58.320221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:14:58.320226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.320237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.320241 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.320248 | orchestrator | 2026-03-07 01:14:58.320253 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-07 01:14:58.320257 | orchestrator | Saturday 07 March 2026 01:08:37 +0000 (0:00:14.385) 0:04:27.502 ******** 2026-03-07 01:14:58.320262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 01:14:58.320267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.320271 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.320276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 01:14:58.320287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.320295 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.320300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-07 01:14:58.320304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.320309 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.320313 | orchestrator | 2026-03-07 01:14:58.320318 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-07 01:14:58.320322 | orchestrator | Saturday 07 March 2026 01:08:40 +0000 (0:00:02.181) 0:04:29.684 ******** 2026-03-07 01:14:58.320327 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:14:58.320331 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:14:58.320335 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:14:58.320340 | orchestrator | 2026-03-07 01:14:58.320344 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-07 01:14:58.320349 | orchestrator | Saturday 07 March 2026 01:08:43 +0000 (0:00:03.469) 0:04:33.153 ******** 2026-03-07 01:14:58.320353 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.320357 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.320362 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.320366 | orchestrator | 2026-03-07 01:14:58.320371 | orchestrator | TASK [nova : Check nova containers] ******************************************** 
2026-03-07 01:14:58.320375 | orchestrator | Saturday 07 March 2026 01:08:43 +0000 (0:00:00.393) 0:04:33.547 ******** 2026-03-07 01:14:58.320387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:14:58.320395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.320400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:14:58.320404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-07 01:14:58.320412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.320422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.320426 | orchestrator | 2026-03-07 01:14:58.320430 | orchestrator | TASK [nova : Flush handlers] 
*************************************************** 2026-03-07 01:14:58.320434 | orchestrator | Saturday 07 March 2026 01:08:47 +0000 (0:00:03.586) 0:04:37.133 ******** 2026-03-07 01:14:58.320438 | orchestrator | 2026-03-07 01:14:58.320441 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-07 01:14:58.320445 | orchestrator | Saturday 07 March 2026 01:08:47 +0000 (0:00:00.160) 0:04:37.294 ******** 2026-03-07 01:14:58.320449 | orchestrator | 2026-03-07 01:14:58.320452 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-07 01:14:58.320456 | orchestrator | Saturday 07 March 2026 01:08:47 +0000 (0:00:00.153) 0:04:37.448 ******** 2026-03-07 01:14:58.320460 | orchestrator | 2026-03-07 01:14:58.320464 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-07 01:14:58.320468 | orchestrator | Saturday 07 March 2026 01:08:47 +0000 (0:00:00.158) 0:04:37.606 ******** 2026-03-07 01:14:58.320472 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:14:58.320475 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:14:58.320479 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:14:58.320483 | orchestrator | 2026-03-07 01:14:58.320487 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-07 01:14:58.320491 | orchestrator | Saturday 07 March 2026 01:09:12 +0000 (0:00:24.694) 0:05:02.300 ******** 2026-03-07 01:14:58.320494 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:14:58.320498 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:14:58.320502 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:14:58.320506 | orchestrator | 2026-03-07 01:14:58.320509 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-07 01:14:58.320513 | orchestrator | 2026-03-07 01:14:58.320517 | orchestrator | TASK 
[nova-cell : include_tasks] *********************************************** 2026-03-07 01:14:58.320521 | orchestrator | Saturday 07 March 2026 01:09:28 +0000 (0:00:16.069) 0:05:18.370 ******** 2026-03-07 01:14:58.320525 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:14:58.320529 | orchestrator | 2026-03-07 01:14:58.320533 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-07 01:14:58.320536 | orchestrator | Saturday 07 March 2026 01:09:30 +0000 (0:00:01.680) 0:05:20.050 ******** 2026-03-07 01:14:58.320540 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:58.320544 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:58.320548 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:58.320551 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.320555 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.320559 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.320565 | orchestrator | 2026-03-07 01:14:58.320569 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-07 01:14:58.320573 | orchestrator | Saturday 07 March 2026 01:09:31 +0000 (0:00:01.497) 0:05:21.548 ******** 2026-03-07 01:14:58.320577 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.320581 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.320584 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.320588 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 01:14:58.320592 | orchestrator | 2026-03-07 01:14:58.320596 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-07 01:14:58.320599 | orchestrator | Saturday 07 March 2026 01:09:34 +0000 (0:00:02.265) 0:05:23.813 
******** 2026-03-07 01:14:58.320603 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-07 01:14:58.320607 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-07 01:14:58.320611 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-07 01:14:58.320615 | orchestrator | 2026-03-07 01:14:58.320618 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-07 01:14:58.320622 | orchestrator | Saturday 07 March 2026 01:09:36 +0000 (0:00:01.846) 0:05:25.659 ******** 2026-03-07 01:14:58.320626 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-07 01:14:58.320630 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-07 01:14:58.320634 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-07 01:14:58.320637 | orchestrator | 2026-03-07 01:14:58.320641 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-07 01:14:58.320645 | orchestrator | Saturday 07 March 2026 01:09:38 +0000 (0:00:02.675) 0:05:28.335 ******** 2026-03-07 01:14:58.320649 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-07 01:14:58.320653 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:58.320656 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-07 01:14:58.320660 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:58.320664 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-07 01:14:58.320668 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:58.320671 | orchestrator | 2026-03-07 01:14:58.320675 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-07 01:14:58.320679 | orchestrator | Saturday 07 March 2026 01:09:41 +0000 (0:00:02.347) 0:05:30.682 ******** 2026-03-07 01:14:58.320685 | orchestrator | changed: [testbed-node-3] => 
(item=net.bridge.bridge-nf-call-iptables) 2026-03-07 01:14:58.320692 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-07 01:14:58.320696 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-07 01:14:58.320699 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-07 01:14:58.320703 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-07 01:14:58.320707 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.320711 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-07 01:14:58.320714 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-07 01:14:58.320718 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-07 01:14:58.320722 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-07 01:14:58.320726 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.320730 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-07 01:14:58.320733 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-07 01:14:58.320737 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.320741 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-07 01:14:58.320749 | orchestrator | 2026-03-07 01:14:58.320756 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-07 01:14:58.320761 | orchestrator | Saturday 07 March 2026 01:09:42 +0000 (0:00:01.898) 0:05:32.581 ******** 2026-03-07 01:14:58.320767 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.320778 | orchestrator | changed: [testbed-node-3] 2026-03-07 01:14:58.320785 | 
orchestrator | changed: [testbed-node-4] 2026-03-07 01:14:58.320793 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.320798 | orchestrator | changed: [testbed-node-5] 2026-03-07 01:14:58.320803 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.320808 | orchestrator | 2026-03-07 01:14:58.320814 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-07 01:14:58.320820 | orchestrator | Saturday 07 March 2026 01:09:44 +0000 (0:00:01.907) 0:05:34.489 ******** 2026-03-07 01:14:58.320826 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.320832 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.320837 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.320843 | orchestrator | changed: [testbed-node-3] 2026-03-07 01:14:58.320849 | orchestrator | changed: [testbed-node-4] 2026-03-07 01:14:58.320855 | orchestrator | changed: [testbed-node-5] 2026-03-07 01:14:58.320860 | orchestrator | 2026-03-07 01:14:58.320866 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-07 01:14:58.320887 | orchestrator | Saturday 07 March 2026 01:09:47 +0000 (0:00:02.720) 0:05:37.209 ******** 2026-03-07 01:14:58.320894 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': 
{'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 01:14:58.320902 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 01:14:58.320917 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-07 01:14:58.320925 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 01:14:58.320936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-07 01:14:58.320942 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-07 01:14:58.320949 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.320956 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.320971 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.320982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:14:58.320988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:14:58.320994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:14:58.321001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.321007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.321301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.321327 | orchestrator | 2026-03-07 01:14:58.321332 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-07 01:14:58.321336 | orchestrator | Saturday 07 March 2026 01:09:53 +0000 (0:00:05.802) 0:05:43.012 ******** 2026-03-07 01:14:58.321341 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:14:58.321347 | orchestrator | 2026-03-07 01:14:58.321351 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-07 01:14:58.321354 | orchestrator | Saturday 07 March 2026 01:09:56 +0000 (0:00:03.320) 0:05:46.333 ******** 2026-03-07 01:14:58.321359 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 
01:14:58.321364 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 01:14:58.321368 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 01:14:58.321372 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-07 01:14:58.321390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-07 01:14:58.321394 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-07 01:14:58.321399 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': 
True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.321403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:14:58.321407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:14:58.321411 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.321426 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.321430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:14:58.321434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.321438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.321442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.321446 | orchestrator | 2026-03-07 01:14:58.321450 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-07 01:14:58.321455 | orchestrator | Saturday 07 March 2026 01:10:05 +0000 (0:00:08.830) 0:05:55.163 ******** 2026-03-07 01:14:58.321459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:14:58.321474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 
67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:14:58.321478 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-07 01:14:58.321482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-07 01:14:58.321486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.321490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.321497 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:58.321502 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:58.321511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-07 01:14:58.321515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.321519 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.321523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:14:58.321527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-07 01:14:58.321531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.321538 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:58.321543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-07 01:14:58.321551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.321555 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.321560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-07 01:14:58.321564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.321568 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.321571 | orchestrator | 2026-03-07 01:14:58.321575 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-07 01:14:58.321579 | orchestrator | Saturday 07 
March 2026 01:10:08 +0000 (0:00:02.884) 0:05:58.048 ******** 2026-03-07 01:14:58.321583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:14:58.321587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-07 01:14:58.321594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.321598 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:58.321607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:14:58.321612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-07 01:14:58.321616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.321620 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:58.321624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:14:58.321631 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-07 01:14:58.321640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.321644 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:58.321648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-07 01:14:58.321652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.321656 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.321660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-07 01:14:58.321668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.321671 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.321675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-07 01:14:58.321682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.321688 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.321692 | orchestrator | 2026-03-07 01:14:58.321696 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-07 01:14:58.321700 | orchestrator | Saturday 07 March 2026 01:10:13 +0000 (0:00:04.932) 0:06:02.980 ******** 2026-03-07 01:14:58.321704 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.321708 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.321712 | orchestrator | skipping: 
[testbed-node-2] 2026-03-07 01:14:58.321716 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-07 01:14:58.321719 | orchestrator | 2026-03-07 01:14:58.321723 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-07 01:14:58.321727 | orchestrator | Saturday 07 March 2026 01:10:14 +0000 (0:00:01.578) 0:06:04.559 ******** 2026-03-07 01:14:58.321731 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-07 01:14:58.321735 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-07 01:14:58.321738 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-07 01:14:58.321742 | orchestrator | 2026-03-07 01:14:58.321746 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-07 01:14:58.321750 | orchestrator | Saturday 07 March 2026 01:10:16 +0000 (0:00:01.126) 0:06:05.686 ******** 2026-03-07 01:14:58.321753 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-07 01:14:58.321757 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-07 01:14:58.321761 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-07 01:14:58.321765 | orchestrator | 2026-03-07 01:14:58.321769 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-07 01:14:58.321773 | orchestrator | Saturday 07 March 2026 01:10:17 +0000 (0:00:01.058) 0:06:06.744 ******** 2026-03-07 01:14:58.321776 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:14:58.321781 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:14:58.321785 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:14:58.321792 | orchestrator | 2026-03-07 01:14:58.321796 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-07 01:14:58.321800 | orchestrator | Saturday 07 March 2026 01:10:17 +0000 (0:00:00.807) 0:06:07.551 ******** 2026-03-07 
01:14:58.321803 | orchestrator | ok: [testbed-node-3] 2026-03-07 01:14:58.321807 | orchestrator | ok: [testbed-node-4] 2026-03-07 01:14:58.321811 | orchestrator | ok: [testbed-node-5] 2026-03-07 01:14:58.321815 | orchestrator | 2026-03-07 01:14:58.321820 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-07 01:14:58.321825 | orchestrator | Saturday 07 March 2026 01:10:19 +0000 (0:00:01.177) 0:06:08.729 ******** 2026-03-07 01:14:58.321831 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-07 01:14:58.321838 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-07 01:14:58.321843 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-07 01:14:58.321849 | orchestrator | 2026-03-07 01:14:58.321856 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-07 01:14:58.321863 | orchestrator | Saturday 07 March 2026 01:10:20 +0000 (0:00:01.643) 0:06:10.372 ******** 2026-03-07 01:14:58.321911 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-07 01:14:58.321919 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-07 01:14:58.321925 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-07 01:14:58.321932 | orchestrator | 2026-03-07 01:14:58.321938 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-07 01:14:58.321944 | orchestrator | Saturday 07 March 2026 01:10:22 +0000 (0:00:01.421) 0:06:11.794 ******** 2026-03-07 01:14:58.321950 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-07 01:14:58.321957 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-07 01:14:58.321963 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-07 01:14:58.321969 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-07 01:14:58.321977 
| orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-07 01:14:58.321984 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-07 01:14:58.321992 | orchestrator | 2026-03-07 01:14:58.321999 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-07 01:14:58.322006 | orchestrator | Saturday 07 March 2026 01:10:27 +0000 (0:00:04.852) 0:06:16.646 ******** 2026-03-07 01:14:58.322047 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:58.322053 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:58.322058 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:58.322062 | orchestrator | 2026-03-07 01:14:58.322067 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-07 01:14:58.322071 | orchestrator | Saturday 07 March 2026 01:10:27 +0000 (0:00:00.634) 0:06:17.281 ******** 2026-03-07 01:14:58.322076 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:58.322081 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:58.322085 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:58.322090 | orchestrator | 2026-03-07 01:14:58.322094 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-07 01:14:58.322099 | orchestrator | Saturday 07 March 2026 01:10:27 +0000 (0:00:00.345) 0:06:17.627 ******** 2026-03-07 01:14:58.322103 | orchestrator | changed: [testbed-node-3] 2026-03-07 01:14:58.322108 | orchestrator | changed: [testbed-node-4] 2026-03-07 01:14:58.322112 | orchestrator | changed: [testbed-node-5] 2026-03-07 01:14:58.322117 | orchestrator | 2026-03-07 01:14:58.322122 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-07 01:14:58.322126 | orchestrator | Saturday 07 March 2026 01:10:29 +0000 (0:00:01.481) 0:06:19.108 ******** 2026-03-07 01:14:58.322131 | orchestrator | changed: 
[testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-07 01:14:58.322137 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-07 01:14:58.322155 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-07 01:14:58.322159 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-07 01:14:58.322164 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-07 01:14:58.322168 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-07 01:14:58.322171 | orchestrator | 2026-03-07 01:14:58.322175 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-07 01:14:58.322179 | orchestrator | Saturday 07 March 2026 01:10:34 +0000 (0:00:04.617) 0:06:23.725 ******** 2026-03-07 01:14:58.322183 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-07 01:14:58.322187 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-07 01:14:58.322191 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-07 01:14:58.322195 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-07 01:14:58.322199 | orchestrator | changed: [testbed-node-3] 2026-03-07 01:14:58.322202 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-07 01:14:58.322206 | orchestrator | changed: [testbed-node-4] 2026-03-07 01:14:58.322210 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-07 01:14:58.322214 | orchestrator | 
changed: [testbed-node-5] 2026-03-07 01:14:58.322218 | orchestrator | 2026-03-07 01:14:58.322221 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-07 01:14:58.322225 | orchestrator | Saturday 07 March 2026 01:10:37 +0000 (0:00:03.785) 0:06:27.511 ******** 2026-03-07 01:14:58.322229 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:58.322233 | orchestrator | 2026-03-07 01:14:58.322237 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-07 01:14:58.322241 | orchestrator | Saturday 07 March 2026 01:10:38 +0000 (0:00:00.147) 0:06:27.658 ******** 2026-03-07 01:14:58.322245 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:58.322248 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:58.322252 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:58.322256 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.322260 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.322264 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.322268 | orchestrator | 2026-03-07 01:14:58.322272 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-07 01:14:58.322275 | orchestrator | Saturday 07 March 2026 01:10:38 +0000 (0:00:00.785) 0:06:28.444 ******** 2026-03-07 01:14:58.322279 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-07 01:14:58.322283 | orchestrator | 2026-03-07 01:14:58.322287 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-07 01:14:58.322291 | orchestrator | Saturday 07 March 2026 01:10:39 +0000 (0:00:00.775) 0:06:29.219 ******** 2026-03-07 01:14:58.322295 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:58.322298 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:58.322302 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:58.322306 | orchestrator 
| skipping: [testbed-node-0] 2026-03-07 01:14:58.322310 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.322314 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.322317 | orchestrator | 2026-03-07 01:14:58.322321 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-07 01:14:58.322325 | orchestrator | Saturday 07 March 2026 01:10:40 +0000 (0:00:00.917) 0:06:30.136 ******** 2026-03-07 01:14:58.322329 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322346 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322359 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322372 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322405 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322412 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322425 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322436 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322454 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322459 | orchestrator | 2026-03-07 01:14:58.322462 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-07 01:14:58.322466 | orchestrator | Saturday 07 March 2026 01:10:44 +0000 (0:00:04.231) 0:06:34.368 ******** 2026-03-07 01:14:58.322470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:14:58.322474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-07 01:14:58.322482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:14:58.322486 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-07 01:14:58.322503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:14:58.322508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-07 01:14:58.322512 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322520 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322534 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-07 01:14:58.322563 | orchestrator | 2026-03-07 01:14:58.322567 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-07 01:14:58.322570 | orchestrator | Saturday 07 March 2026 01:10:52 +0000 (0:00:08.213) 0:06:42.582 ******** 2026-03-07 01:14:58.322574 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:58.322578 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:58.322582 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.322586 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.322590 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.322593 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:58.322597 | orchestrator | 2026-03-07 01:14:58.322601 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-07 01:14:58.322605 | orchestrator | Saturday 07 March 2026 01:10:55 +0000 (0:00:02.138) 0:06:44.720 ******** 2026-03-07 01:14:58.322608 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-07 01:14:58.322612 | 
orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-07 01:14:58.322616 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-07 01:14:58.322620 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-07 01:14:58.322628 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-07 01:14:58.322634 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.322638 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-07 01:14:58.322642 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.322646 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-07 01:14:58.322650 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-07 01:14:58.322653 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.322658 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-07 01:14:58.322661 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-07 01:14:58.322666 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-07 01:14:58.322670 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-07 01:14:58.322674 | orchestrator | 2026-03-07 01:14:58.322677 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-07 01:14:58.322685 | orchestrator | Saturday 07 March 2026 01:11:00 +0000 (0:00:05.049) 0:06:49.769 ******** 2026-03-07 01:14:58.322690 | orchestrator | skipping: [testbed-node-3] 2026-03-07 
01:14:58.322693 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:58.322697 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:58.322701 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.322705 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.322709 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.322713 | orchestrator | 2026-03-07 01:14:58.322717 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-07 01:14:58.322721 | orchestrator | Saturday 07 March 2026 01:11:00 +0000 (0:00:00.675) 0:06:50.445 ******** 2026-03-07 01:14:58.322725 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-07 01:14:58.322729 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-07 01:14:58.322733 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-07 01:14:58.322737 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-07 01:14:58.322741 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-07 01:14:58.322745 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-07 01:14:58.322748 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-07 01:14:58.322752 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-07 01:14:58.322756 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 
'nova-libvirt'})  2026-03-07 01:14:58.322760 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-07 01:14:58.322763 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.322767 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-07 01:14:58.322771 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.322775 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-07 01:14:58.322779 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-07 01:14:58.322782 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.322786 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-07 01:14:58.322790 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-07 01:14:58.322794 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-07 01:14:58.322798 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-07 01:14:58.322802 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-07 01:14:58.322805 | orchestrator | 2026-03-07 01:14:58.322809 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-07 01:14:58.322813 | orchestrator | Saturday 07 March 2026 01:11:07 +0000 (0:00:06.488) 0:06:56.934 ******** 2026-03-07 01:14:58.322817 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 
'dest': 'sshd_config'})  2026-03-07 01:14:58.322820 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-07 01:14:58.322831 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-07 01:14:58.322838 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-07 01:14:58.322842 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-07 01:14:58.322845 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-07 01:14:58.322849 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-07 01:14:58.322853 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-07 01:14:58.322857 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-07 01:14:58.322861 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-07 01:14:58.322864 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-07 01:14:58.322885 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-07 01:14:58.322890 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-07 01:14:58.322894 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-07 01:14:58.322898 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-07 01:14:58.322902 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.322906 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-07 01:14:58.322909 | orchestrator | 
skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-07 01:14:58.322913 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.322917 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-07 01:14:58.322921 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-07 01:14:58.322925 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.322929 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-07 01:14:58.322933 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-07 01:14:58.322937 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-07 01:14:58.322941 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-07 01:14:58.322945 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-07 01:14:58.322949 | orchestrator | 2026-03-07 01:14:58.322953 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-07 01:14:58.322957 | orchestrator | Saturday 07 March 2026 01:11:17 +0000 (0:00:10.523) 0:07:07.457 ******** 2026-03-07 01:14:58.322961 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:58.322965 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:58.322969 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:58.322973 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.322977 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.322981 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.322985 | orchestrator | 2026-03-07 01:14:58.322989 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 
2026-03-07 01:14:58.322992 | orchestrator | Saturday 07 March 2026 01:11:18 +0000 (0:00:01.073) 0:07:08.530 ******** 2026-03-07 01:14:58.322996 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:58.323000 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:58.323004 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:58.323008 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.323018 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.323022 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.323026 | orchestrator | 2026-03-07 01:14:58.323030 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-07 01:14:58.323034 | orchestrator | Saturday 07 March 2026 01:11:19 +0000 (0:00:00.699) 0:07:09.230 ******** 2026-03-07 01:14:58.323038 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.323042 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.323045 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.323049 | orchestrator | changed: [testbed-node-4] 2026-03-07 01:14:58.323053 | orchestrator | changed: [testbed-node-3] 2026-03-07 01:14:58.323057 | orchestrator | changed: [testbed-node-5] 2026-03-07 01:14:58.323060 | orchestrator | 2026-03-07 01:14:58.323064 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-07 01:14:58.323068 | orchestrator | Saturday 07 March 2026 01:11:22 +0000 (0:00:02.549) 0:07:11.779 ******** 2026-03-07 01:14:58.323079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:14:58.323083 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-07 01:14:58.323087 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.323091 | orchestrator | skipping: 
[testbed-node-4] 2026-03-07 01:14:58.323096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-07 01:14:58.323105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.323109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-07 01:14:58.323113 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.323117 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.323124 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.323132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:14:58.323136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-07 01:14:58.323140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.323148 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:58.323152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-07 01:14:58.323156 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-07 01:14:58.323168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.323172 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:58.323177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-07 01:14:58.323181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-07 01:14:58.323189 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.323193 | orchestrator | 2026-03-07 01:14:58.323197 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-07 01:14:58.323201 | orchestrator | Saturday 07 March 2026 01:11:24 +0000 (0:00:02.007) 0:07:13.787 ******** 2026-03-07 01:14:58.323205 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-07 01:14:58.323210 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-07 01:14:58.323215 | orchestrator | skipping: [testbed-node-3] 2026-03-07 01:14:58.323221 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-07 01:14:58.323227 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-07 01:14:58.323234 | orchestrator | skipping: [testbed-node-4] 2026-03-07 01:14:58.323240 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-07 
01:14:58.323246 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-07 01:14:58.323252 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-07 01:14:58.323258 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-07 01:14:58.323264 | orchestrator | skipping: [testbed-node-5] 2026-03-07 01:14:58.323270 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-07 01:14:58.323275 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-07 01:14:58.323281 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.323288 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.323294 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-07 01:14:58.323300 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-07 01:14:58.323308 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.323317 | orchestrator | 2026-03-07 01:14:58.323325 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-07 01:14:58.323337 | orchestrator | Saturday 07 March 2026 01:11:25 +0000 (0:00:01.052) 0:07:14.839 ******** 2026-03-07 01:14:58.323345 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': 
{'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 01:14:58.323360 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-07 01:14:58.323367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-07 01:14:58.323379 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-07 01:14:58.323386 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-07 01:14:58.323393 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-07 01:14:58.323403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-07 01:14:58.323411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-07 01:14:58.323415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:14:58.323423 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-07 01:14:58.323427 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-07 01:14:58.323431 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-07 01:14:58.323435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:14:58.323444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-07 01:14:58.323449 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-07 01:14:58.323456 | orchestrator |
2026-03-07 01:14:58.323460 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-07 01:14:58.323464 | orchestrator | Saturday 07 March 2026 01:11:28 +0000 (0:00:03.439) 0:07:18.278 ********
2026-03-07 01:14:58.323468 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:14:58.323472 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:14:58.323476 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:14:58.323480 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.323484 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.323487 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.323491 | orchestrator |
2026-03-07 01:14:58.323495 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-07 01:14:58.323499 | orchestrator | Saturday 07 March 2026 01:11:29 +0000 (0:00:00.845) 0:07:19.124 ********
2026-03-07 01:14:58.323503 | orchestrator |
2026-03-07 01:14:58.323507 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-07 01:14:58.323510 | orchestrator | Saturday 07 March 2026 01:11:29 +0000 (0:00:00.149) 0:07:19.274 ********
2026-03-07 01:14:58.323514 | orchestrator |
2026-03-07 01:14:58.323518 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-07 01:14:58.323522 | orchestrator | Saturday 07 March 2026 01:11:29 +0000 (0:00:00.139) 0:07:19.413 ********
2026-03-07 01:14:58.323526 | orchestrator |
2026-03-07 01:14:58.323530 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-07 01:14:58.323534 | orchestrator | Saturday 07 March 2026 01:11:29 +0000 (0:00:00.136) 0:07:19.549 ********
2026-03-07 01:14:58.323538 | orchestrator |
2026-03-07 01:14:58.323542 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-07 01:14:58.323546 | orchestrator | Saturday 07 March 2026 01:11:30 +0000 (0:00:00.138) 0:07:19.688 ********
2026-03-07 01:14:58.323550 | orchestrator |
2026-03-07 01:14:58.323553 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-07 01:14:58.323557 | orchestrator | Saturday 07 March 2026 01:11:30 +0000 (0:00:00.341) 0:07:20.030 ********
2026-03-07 01:14:58.323561 | orchestrator |
2026-03-07 01:14:58.323565 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-03-07 01:14:58.323569 | orchestrator | Saturday 07 March 2026 01:11:30 +0000 (0:00:00.244) 0:07:20.274 ********
2026-03-07 01:14:58.323572 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:58.323576 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:14:58.323580 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:14:58.323584 | orchestrator |
2026-03-07 01:14:58.323588 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-03-07 01:14:58.323592 | orchestrator | Saturday 07 March 2026 01:11:39 +0000 (0:00:08.904) 0:07:29.179 ********
2026-03-07 01:14:58.323595 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:58.323599 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:14:58.323603 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:14:58.323607 | orchestrator |
2026-03-07 01:14:58.323611 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-03-07 01:14:58.323614 | orchestrator | Saturday 07 March 2026 01:12:02 +0000 (0:00:23.069) 0:07:52.248 ********
2026-03-07 01:14:58.323618 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:14:58.323627 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:14:58.323630 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:14:58.323634 | orchestrator |
2026-03-07 01:14:58.323638 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-03-07 01:14:58.323642 | orchestrator | Saturday 07 March 2026 01:12:30 +0000 (0:00:27.835) 0:08:20.083 ********
2026-03-07 01:14:58.323646 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:14:58.323650 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:14:58.323654 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:14:58.323658 | orchestrator |
2026-03-07 01:14:58.323662 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-03-07 01:14:58.323666 | orchestrator | Saturday 07 March 2026 01:13:09 +0000 (0:00:39.121) 0:08:59.204 ********
2026-03-07 01:14:58.323669 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:14:58.323673 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:14:58.323677 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:14:58.323681 | orchestrator |
2026-03-07 01:14:58.323688 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-03-07 01:14:58.323695 | orchestrator | Saturday 07 March 2026 01:13:10 +0000 (0:00:00.957) 0:09:00.162 ********
2026-03-07 01:14:58.323699 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:14:58.323703 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:14:58.323706 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:14:58.323710 | orchestrator |
2026-03-07 01:14:58.323714 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-03-07 01:14:58.323718 | orchestrator | Saturday 07 March 2026 01:13:11 +0000 (0:00:00.850) 0:09:01.013 ********
2026-03-07 01:14:58.323722 | orchestrator | changed: [testbed-node-4]
2026-03-07 01:14:58.323726 | orchestrator | changed: [testbed-node-3]
2026-03-07 01:14:58.323730 | orchestrator | changed: [testbed-node-5]
2026-03-07 01:14:58.323733 | orchestrator |
2026-03-07 01:14:58.323737 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-03-07 01:14:58.323741 | orchestrator | Saturday 07 March 2026 01:13:34 +0000 (0:00:22.914) 0:09:23.927 ********
2026-03-07 01:14:58.323745 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:14:58.323749 | orchestrator |
2026-03-07 01:14:58.323753 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-03-07 01:14:58.323757 | orchestrator | Saturday 07 March 2026 01:13:34 +0000 (0:00:00.123) 0:09:24.050 ********
2026-03-07 01:14:58.323761 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.323764 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:14:58.323768 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.323772 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:14:58.323776 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.323780 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-03-07 01:14:58.323784 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-07 01:14:58.323788 | orchestrator |
2026-03-07 01:14:58.323792 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-03-07 01:14:58.323796 | orchestrator | Saturday 07 March 2026 01:13:58 +0000 (0:00:24.065) 0:09:48.115 ********
2026-03-07 01:14:58.323799 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:14:58.323803 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:14:58.323807 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.323811 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:14:58.323815 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.323819 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.323822 | orchestrator |
2026-03-07 01:14:58.323826 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-03-07 01:14:58.323830 | orchestrator | Saturday 07 March 2026 01:14:09 +0000 (0:00:10.662) 0:09:58.778 ********
2026-03-07 01:14:58.323834 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.323838 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:14:58.323847 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:14:58.323851 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.323855 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.323859 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2026-03-07 01:14:58.323863 | orchestrator |
2026-03-07 01:14:58.323867 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-07 01:14:58.323886 | orchestrator | Saturday 07 March 2026 01:14:15 +0000 (0:00:06.068) 0:10:04.847 ********
2026-03-07 01:14:58.323890 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-07 01:14:58.323894 | orchestrator |
2026-03-07 01:14:58.323898 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-07 01:14:58.323902 | orchestrator | Saturday 07 March 2026 01:14:30 +0000 (0:00:15.059) 0:10:19.907 ********
2026-03-07 01:14:58.323905 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-07 01:14:58.323909 | orchestrator |
2026-03-07 01:14:58.323913 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-03-07 01:14:58.323917 | orchestrator | Saturday 07 March 2026 01:14:31 +0000 (0:00:01.649) 0:10:21.556 ********
2026-03-07 01:14:58.323921 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:14:58.323925 | orchestrator |
2026-03-07 01:14:58.323928 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-03-07 01:14:58.323932 | orchestrator | Saturday 07 March 2026 01:14:33 +0000 (0:00:01.531) 0:10:23.087 ********
2026-03-07 01:14:58.323936 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-07 01:14:58.323940 | orchestrator |
2026-03-07 01:14:58.323944 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-03-07 01:14:58.323948 | orchestrator | Saturday 07 March 2026 01:14:46 +0000 (0:00:12.908) 0:10:35.995 ********
2026-03-07 01:14:58.323952 | orchestrator | ok: [testbed-node-3]
2026-03-07 01:14:58.323955 | orchestrator | ok: [testbed-node-4]
2026-03-07 01:14:58.323959 | orchestrator | ok: [testbed-node-5]
2026-03-07 01:14:58.323963 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:14:58.323967 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:14:58.323971 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:14:58.323974 | orchestrator |
2026-03-07 01:14:58.323978 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-03-07 01:14:58.323982 | orchestrator |
2026-03-07 01:14:58.323986 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-03-07 01:14:58.323990 | orchestrator | Saturday 07 March 2026 01:14:48 +0000 (0:00:02.266) 0:10:38.262 ********
2026-03-07 01:14:58.323994 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:58.323998 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:14:58.324002 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:14:58.324006 | orchestrator |
2026-03-07 01:14:58.324009 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-03-07 01:14:58.324013 | orchestrator |
2026-03-07 01:14:58.324017 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-03-07 01:14:58.324021 | orchestrator | Saturday 07 March 2026 01:14:49 +0000 (0:00:01.302) 0:10:39.565 ********
2026-03-07 01:14:58.324024 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.324028 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.324032 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.324036 | orchestrator |
2026-03-07 01:14:58.324042 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-03-07 01:14:58.324046 | orchestrator |
2026-03-07 01:14:58.324053 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-03-07 01:14:58.324057 | orchestrator | Saturday 07 March 2026 01:14:50 +0000 (0:00:00.689) 0:10:40.254 ********
2026-03-07 01:14:58.324061 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-03-07 01:14:58.324065 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-07 01:14:58.324069 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-07 01:14:58.324076 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-03-07 01:14:58.324080 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-03-07 01:14:58.324084 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-03-07 01:14:58.324088 | orchestrator | skipping: [testbed-node-3]
2026-03-07 01:14:58.324091 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-03-07 01:14:58.324095 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-07 01:14:58.324099 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-07 01:14:58.324103 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-03-07 01:14:58.324106 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-03-07 01:14:58.324110 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-03-07 01:14:58.324114 | orchestrator | skipping: [testbed-node-4]
2026-03-07 01:14:58.324118 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-03-07 01:14:58.324122 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-07 01:14:58.324126 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-07 01:14:58.324129 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-03-07 01:14:58.324133 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-03-07 01:14:58.324137 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-03-07 01:14:58.324141 | orchestrator | skipping: [testbed-node-5]
2026-03-07 01:14:58.324145 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-03-07 01:14:58.324148 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-07 01:14:58.324155 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-07 01:14:58.324160 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-03-07 01:14:58.324166 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-03-07 01:14:58.324172 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-03-07 01:14:58.324179 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.324185 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-03-07 01:14:58.324191 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-07 01:14:58.324197 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-07 01:14:58.324202 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-03-07 01:14:58.324206 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-03-07 01:14:58.324210 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-03-07 01:14:58.324213 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.324217 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-03-07 01:14:58.324221 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-07 01:14:58.324225 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-07 01:14:58.324229 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-03-07 01:14:58.324233 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-03-07 01:14:58.324236 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-03-07 01:14:58.324240 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.324244 | orchestrator |
2026-03-07 01:14:58.324248 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-03-07 01:14:58.324252 | orchestrator |
2026-03-07 01:14:58.324256 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-03-07 01:14:58.324259 | orchestrator | Saturday 07 March 2026 01:14:52 +0000 (0:00:01.656) 0:10:41.910 ********
2026-03-07 01:14:58.324263 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-03-07 01:14:58.324271 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-03-07 01:14:58.324275 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.324279 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-03-07 01:14:58.324283 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-03-07 01:14:58.324286 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.324290 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-03-07 01:14:58.324294 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-03-07 01:14:58.324298 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.324302 | orchestrator |
2026-03-07 01:14:58.324306 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-03-07 01:14:58.324309 | orchestrator |
2026-03-07 01:14:58.324313 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-03-07 01:14:58.324317 | orchestrator | Saturday 07 March 2026 01:14:53 +0000 (0:00:00.903) 0:10:42.814 ********
2026-03-07 01:14:58.324321 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.324325 | orchestrator |
2026-03-07 01:14:58.324329 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-03-07 01:14:58.324333 | orchestrator |
2026-03-07 01:14:58.324336 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-03-07 01:14:58.324340 | orchestrator | Saturday 07 March 2026 01:14:54 +0000 (0:00:00.855) 0:10:43.670 ********
2026-03-07 01:14:58.324344 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.324351 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.324359 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.324363 | orchestrator |
2026-03-07 01:14:58.324367 | orchestrator | PLAY RECAP *********************************************************************
2026-03-07 01:14:58.324371 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-07 01:14:58.324375 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-03-07 01:14:58.324380 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-07 01:14:58.324383 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-07 01:14:58.324387 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-07 01:14:58.324391 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-03-07 01:14:58.324395 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-07 01:14:58.324399 | orchestrator |
2026-03-07 01:14:58.324402 | orchestrator |
2026-03-07 01:14:58.324406 | orchestrator | TASKS RECAP ********************************************************************
2026-03-07 01:14:58.324410 | orchestrator | Saturday 07 March 2026 01:14:54 +0000 (0:00:00.745) 0:10:44.416 ********
2026-03-07 01:14:58.324414 | orchestrator | ===============================================================================
2026-03-07 01:14:58.324418 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 39.12s
2026-03-07 01:14:58.324421 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 36.45s
2026-03-07 01:14:58.324425 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 27.84s
2026-03-07 01:14:58.324429 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 25.15s
2026-03-07 01:14:58.324433 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 24.69s
2026-03-07 01:14:58.324444 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 24.07s
2026-03-07 01:14:58.324447 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 23.07s
2026-03-07 01:14:58.324451 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 22.98s
2026-03-07 01:14:58.324455 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 22.91s
2026-03-07 01:14:58.324459 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 17.03s
2026-03-07 01:14:58.324463 | orchestrator | nova : Restart nova-api container -------------------------------------- 16.07s
2026-03-07 01:14:58.324467 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.79s
2026-03-07 01:14:58.324470 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.06s
2026-03-07 01:14:58.324474 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 14.39s
2026-03-07 01:14:58.324478 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.98s
2026-03-07 01:14:58.324482 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.91s
2026-03-07 01:14:58.324486 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.16s
2026-03-07 01:14:58.324489 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.66s
2026-03-07 01:14:58.324493 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 10.52s
2026-03-07 01:14:58.324497 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.10s
2026-03-07 01:14:58.324501 | orchestrator | 2026-03-07 01:14:58 | INFO  | Task 2e1dd76f-d9d6-44a4-abb8-becf57ce4644 is in state SUCCESS
2026-03-07 01:14:58.324505 | orchestrator |
2026-03-07 01:14:58.324508 | orchestrator |
2026-03-07 01:14:58.324512 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-07 01:14:58.324516 | orchestrator |
2026-03-07 01:14:58.324520 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-07 01:14:58.324524 | orchestrator | Saturday 07 March 2026 01:12:18 +0000 (0:00:00.342) 0:00:00.342 ********
2026-03-07 01:14:58.324527 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:14:58.324531 | orchestrator | ok: [testbed-node-1]
2026-03-07 01:14:58.324535 | orchestrator | ok: [testbed-node-2]
2026-03-07 01:14:58.324539 | orchestrator |
2026-03-07 01:14:58.324543 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-07 01:14:58.324546 | orchestrator | Saturday 07 March 2026 01:12:18 +0000 (0:00:00.338) 0:00:00.681 ********
2026-03-07 01:14:58.324550 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-03-07 01:14:58.324554 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-03-07 01:14:58.324558 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-03-07 01:14:58.324562 | orchestrator |
2026-03-07 01:14:58.324566 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-03-07 01:14:58.324570 | orchestrator |
2026-03-07 01:14:58.324577 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-07 01:14:58.324583 | orchestrator | Saturday 07 March 2026 01:12:19 +0000 (0:00:00.496) 0:00:01.177 ********
2026-03-07 01:14:58.324587 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:14:58.324591 | orchestrator |
2026-03-07 01:14:58.324595 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-03-07 01:14:58.324599 | orchestrator | Saturday 07 March 2026 01:12:19 +0000 (0:00:00.654) 0:00:01.832 ********
2026-03-07 01:14:58.324603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-07 01:14:58.324616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-07 01:14:58.324620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-07 01:14:58.324624 | orchestrator |
2026-03-07 01:14:58.324628 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-03-07 01:14:58.324632 | orchestrator | Saturday 07 March 2026 01:12:20 +0000 (0:00:00.903) 0:00:02.736 ********
2026-03-07 01:14:58.324636 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-03-07 01:14:58.324639 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-03-07 01:14:58.324643 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-07 01:14:58.324647 | orchestrator |
2026-03-07 01:14:58.324651 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-07 01:14:58.324655 | orchestrator | Saturday 07 March 2026 01:12:21 +0000 (0:00:00.929) 0:00:03.666 ********
2026-03-07 01:14:58.324659 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-07 01:14:58.324663 | orchestrator |
2026-03-07 01:14:58.324667 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-03-07 01:14:58.324670 | orchestrator | Saturday 07 March 2026 01:12:22 +0000 (0:00:00.739) 0:00:04.405 ********
2026-03-07 01:14:58.324674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-07 01:14:58.324687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-07 01:14:58.324694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-07 01:14:58.324698 | orchestrator |
2026-03-07 01:14:58.324702 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-03-07 01:14:58.324706 | orchestrator | Saturday 07 March 2026 01:12:23 +0000 (0:00:01.326) 0:00:05.731 ********
2026-03-07 01:14:58.324710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-07 01:14:58.324714 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.324718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-07 01:14:58.324722 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.324726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value':
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-07 01:14:58.324730 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.324734 | orchestrator | 2026-03-07 01:14:58.324738 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-07 01:14:58.324741 | orchestrator | Saturday 07 March 2026 01:12:24 +0000 (0:00:00.421) 0:00:06.153 ******** 2026-03-07 01:14:58.324753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-07 01:14:58.324760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-07 01:14:58.324764 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.324768 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.324772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-07 01:14:58.324776 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.324780 | orchestrator | 2026-03-07 01:14:58.324784 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-07 01:14:58.324788 | orchestrator | Saturday 07 March 2026 01:12:25 +0000 (0:00:00.902) 0:00:07.055 ******** 2026-03-07 01:14:58.324792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:14:58.325043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:14:58.325057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:14:58.325066 | orchestrator | 2026-03-07 01:14:58.325071 | orchestrator | TASK [grafana : Copying over grafana.ini] 
************************************** 2026-03-07 01:14:58.325078 | orchestrator | Saturday 07 March 2026 01:12:26 +0000 (0:00:01.205) 0:00:08.261 ******** 2026-03-07 01:14:58.325083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:14:58.325087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:14:58.325091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-07 01:14:58.325095 | orchestrator | 2026-03-07 01:14:58.325098 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-07 01:14:58.325102 | orchestrator | Saturday 07 March 2026 01:12:27 +0000 (0:00:01.509) 0:00:09.771 ******** 2026-03-07 01:14:58.325106 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.325110 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.325114 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.325118 | orchestrator | 2026-03-07 01:14:58.325122 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-07 01:14:58.325125 | orchestrator | Saturday 07 March 2026 01:12:28 +0000 (0:00:00.560) 0:00:10.331 ******** 2026-03-07 01:14:58.325129 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-07 01:14:58.325133 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-07 01:14:58.325137 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-07 01:14:58.325141 | orchestrator | 2026-03-07 01:14:58.325145 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-03-07 01:14:58.325148 | orchestrator | Saturday 07 March 2026 01:12:29 +0000 (0:00:01.233) 0:00:11.565 ******** 2026-03-07 01:14:58.325156 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-07 01:14:58.325164 | 
orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-07 01:14:58.325168 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-07 01:14:58.325172 | orchestrator | 2026-03-07 01:14:58.325176 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-07 01:14:58.325180 | orchestrator | Saturday 07 March 2026 01:12:31 +0000 (0:00:01.665) 0:00:13.230 ******** 2026-03-07 01:14:58.325183 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-07 01:14:58.325187 | orchestrator | 2026-03-07 01:14:58.325191 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-07 01:14:58.325195 | orchestrator | Saturday 07 March 2026 01:12:32 +0000 (0:00:01.121) 0:00:14.352 ******** 2026-03-07 01:14:58.325199 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-07 01:14:58.325202 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-07 01:14:58.325206 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:14:58.325210 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:14:58.325214 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:14:58.325218 | orchestrator | 2026-03-07 01:14:58.325221 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-07 01:14:58.325230 | orchestrator | Saturday 07 March 2026 01:12:33 +0000 (0:00:01.158) 0:00:15.511 ******** 2026-03-07 01:14:58.325234 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.325237 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:14:58.325241 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:14:58.325245 | orchestrator | 2026-03-07 01:14:58.325249 | orchestrator | TASK [grafana : Copying over custom dashboards] 
******************************** 2026-03-07 01:14:58.325253 | orchestrator | Saturday 07 March 2026 01:12:34 +0000 (0:00:01.255) 0:00:16.767 ******** 2026-03-07 01:14:58.325257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094126, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.3989735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094126, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.3989735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094126, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1772842719.3989735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094172, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4100144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094172, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4100144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094172, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1772842719.4100144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094136, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.400671, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094136, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.400671, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094136, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1772842719.400671, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094173, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4120226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094173, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4120226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094173, 
'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4120226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094146, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4034915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094146, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4034915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094146, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4034915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094160, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4080296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094160, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4080296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094160, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4080296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094124, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.3976727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094124, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.3976727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.325367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094124, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.3976727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094131, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.3995826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094131, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.3995826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094131, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.3995826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094137, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.401458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094137, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.401458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094137, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.401458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094150, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4055047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094150, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4055047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094150, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4055047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094169, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4095883, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094169, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4095883, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094169, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4095883, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094134, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.3995826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094134, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.3995826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094134, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.3995826, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094156, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4070592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094156, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4070592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094156, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4070592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094147, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4044914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094147, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4044914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094147, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4044914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094145, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4034915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094145, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4034915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094145, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4034915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094143, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4034727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094143, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4034727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094143, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4034727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094152, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.406391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094152, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.406391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094152, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.406391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094139, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4024916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094139, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4024916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094139, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4024916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094165, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4090652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094165, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4090652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094165, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4090652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094391, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4419677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094391, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4419677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094391, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4419677, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094210, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.421557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094210, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.421557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094210, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.421557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094193, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4157379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094193, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4157379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094193, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4157379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094251, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.425927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094251, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.425927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094251, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.425927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094180, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4126792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094180, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4126792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094180, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4126792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094322, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.325656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094322, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.326129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094322, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.326156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094260, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4316278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.326168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094260, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4316278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.326172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094260, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4316278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.326176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094338, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4363375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.326180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094338, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4363375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.326190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094338, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4363375, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.326197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094380, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4405558, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094380, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4405558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094380, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4405558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094315, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1772842719.4340358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094315, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4340358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094315, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4340358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094243, 'dev': 116, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.424343, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094243, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.424343, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094243, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.424343, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 82960, 'inode': 1094203, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.418124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094203, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.418124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094203, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.418124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094234, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4229, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094234, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4229, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094234, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4229, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094195, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4171638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094195, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4171638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094195, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4171638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094247, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4247463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094247, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4247463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094247, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4247463, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 
01:14:58.326316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094362, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4400642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094362, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4400642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094362, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4400642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094352, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4380438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094352, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4380438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094352, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4380438, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094182, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4141912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094182, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4141912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094182, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1772842719.4141912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094187, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4149127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094187, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4149127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 
1094187, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4149127, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094300, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4331777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094300, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4331777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094300, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4331777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094345, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4365232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094345, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4365232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-07 01:14:58.326412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094345, 'dev': 116, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772842719.4365232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-07 01:14:58.326416 | orchestrator |
2026-03-07 01:14:58.326420 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-03-07 01:14:58.326424 | orchestrator | Saturday 07 March 2026 01:13:16 +0000 (0:00:42.072) 0:00:58.840 ********
2026-03-07 01:14:58.326428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-07 01:14:58.326432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-07 01:14:58.326436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-07 01:14:58.326443 | orchestrator |
2026-03-07 01:14:58.326447 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-03-07 01:14:58.326453 | orchestrator | Saturday 07 March 2026 01:13:18 +0000 (0:00:01.324) 0:01:00.165 ********
2026-03-07 01:14:58.326459 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:58.326465 | orchestrator |
2026-03-07 01:14:58.326471 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-03-07 01:14:58.326480 | orchestrator | Saturday 07 March 2026 01:13:20 +0000 (0:00:02.738) 0:01:02.903 ********
2026-03-07 01:14:58.326486 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:58.326491 | orchestrator |
2026-03-07 01:14:58.326497 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-07 01:14:58.326503 | orchestrator | Saturday 07 March 2026 01:13:23 +0000 (0:00:02.609) 0:01:05.513 ********
2026-03-07 01:14:58.326509 | orchestrator |
2026-03-07 01:14:58.326515 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-07 01:14:58.326522 | orchestrator | Saturday 07 March 2026 01:13:23 +0000 (0:00:00.075) 0:01:05.588 ********
2026-03-07 01:14:58.326528 | orchestrator |
2026-03-07 01:14:58.326532 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-07 01:14:58.326536 | orchestrator | Saturday 07 March 2026 01:13:23 +0000 (0:00:00.077) 0:01:05.665 ********
2026-03-07 01:14:58.326540 | orchestrator |
2026-03-07 01:14:58.326544 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-03-07 01:14:58.326547 | orchestrator | Saturday 07 March 2026 01:13:23 +0000 (0:00:00.271) 0:01:05.937 ********
2026-03-07 01:14:58.326551 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.326555 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.326559 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:14:58.326562 | orchestrator |
2026-03-07 01:14:58.326571 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-03-07 01:14:58.326575 | orchestrator | Saturday 07 March 2026 01:13:25 +0000 (0:00:02.010) 0:01:07.948 ********
2026-03-07 01:14:58.326579 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.326583 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.326587 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-03-07 01:14:58.326591 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-03-07 01:14:58.326595 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-03-07 01:14:58.326598 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2026-03-07 01:14:58.326602 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:14:58.326606 | orchestrator |
2026-03-07 01:14:58.326610 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-03-07 01:14:58.326614 | orchestrator | Saturday 07 March 2026 01:14:17 +0000 (0:00:51.610) 0:01:59.558 ********
2026-03-07 01:14:58.326618 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.326622 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:14:58.326626 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:14:58.326629 | orchestrator |
2026-03-07 01:14:58.326634 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-03-07 01:14:58.326637 | orchestrator | Saturday 07 March 2026 01:14:51 +0000 (0:00:34.086) 0:02:33.645 ********
2026-03-07 01:14:58.326641 | orchestrator | ok: [testbed-node-0]
2026-03-07 01:14:58.326645 | orchestrator |
2026-03-07 01:14:58.326649 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-03-07 01:14:58.326653 | orchestrator | Saturday 07 March 2026 01:14:54 +0000 (0:00:02.444) 0:02:36.089 ********
2026-03-07 01:14:58.326656 | orchestrator | skipping: [testbed-node-0]
2026-03-07 01:14:58.326660 | orchestrator | skipping: [testbed-node-1]
2026-03-07 01:14:58.326669 | orchestrator | skipping: [testbed-node-2]
2026-03-07 01:14:58.326673 | orchestrator |
2026-03-07 01:14:58.326677 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-03-07 01:14:58.326681 | orchestrator | Saturday 07 March 2026 01:14:54 +0000 (0:00:00.819) 0:02:36.909 ********
2026-03-07 01:14:58.326686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True,
'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-03-07 01:14:58.326691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-07 01:14:58.326698 | orchestrator | 2026-03-07 01:14:58.326703 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-07 01:14:58.326709 | orchestrator | Saturday 07 March 2026 01:14:57 +0000 (0:00:02.624) 0:02:39.534 ******** 2026-03-07 01:14:58.326721 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:14:58.326730 | orchestrator | 2026-03-07 01:14:58.326735 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:14:58.326741 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-07 01:14:58.326748 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-07 01:14:58.326754 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-07 01:14:58.326760 | orchestrator | 2026-03-07 01:14:58.326766 | orchestrator | 2026-03-07 01:14:58.326772 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:14:58.326779 | orchestrator | Saturday 07 March 2026 01:14:57 +0000 (0:00:00.282) 0:02:39.816 ******** 2026-03-07 01:14:58.326790 | orchestrator | =============================================================================== 2026-03-07 01:14:58.326796 | orchestrator | grafana : 
Waiting for grafana to start on first node ------------------- 51.61s 2026-03-07 01:14:58.326801 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 42.07s 2026-03-07 01:14:58.326806 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 34.09s 2026-03-07 01:14:58.326812 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.74s 2026-03-07 01:14:58.326819 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.62s 2026-03-07 01:14:58.326825 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.61s 2026-03-07 01:14:58.326832 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.44s 2026-03-07 01:14:58.326838 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.01s 2026-03-07 01:14:58.326844 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.67s 2026-03-07 01:14:58.326850 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.51s 2026-03-07 01:14:58.326862 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.33s 2026-03-07 01:14:58.326906 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.32s 2026-03-07 01:14:58.326913 | orchestrator | grafana : Prune templated Grafana dashboards ---------------------------- 1.26s 2026-03-07 01:14:58.326918 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.23s 2026-03-07 01:14:58.326923 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.21s 2026-03-07 01:14:58.326932 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 1.16s 2026-03-07 01:14:58.326937 | orchestrator | grafana : Find custom 
grafana dashboards -------------------------------- 1.12s 2026-03-07 01:14:58.326941 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.93s 2026-03-07 01:14:58.326946 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.90s 2026-03-07 01:14:58.326950 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.90s 2026-03-07 01:14:58.326955 | orchestrator | 2026-03-07 01:14:58 | INFO  | Task 19c99daf-1670-4b09-947c-220806e8b65d is in state STARTED 2026-03-07 01:14:58.326960 | orchestrator | 2026-03-07 01:14:58 | INFO  | Wait 1 second(s) until the next check [... repeated "Task 19c99daf-1670-4b09-947c-220806e8b65d is in state STARTED" / "Wait 1 second(s) until the next check" polling messages from 01:14:58 through 01:17:57 elided ...] 2026-03-07 01:18:00.702866 | orchestrator | 2026-03-07 01:18:00 | INFO  | Task 19c99daf-1670-4b09-947c-220806e8b65d is in state SUCCESS 2026-03-07 01:18:00.703854 | orchestrator | 2026-03-07 01:18:00.703896 | orchestrator | 2026-03-07 01:18:00.703906 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-07 01:18:00.703915 | orchestrator | 2026-03-07 01:18:00.703922 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-07 01:18:00.703930 | orchestrator | Saturday 07 March 2026 
01:12:50 +0000 (0:00:00.375) 0:00:00.375 ******** 2026-03-07 01:18:00.703937 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:18:00.703946 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:18:00.703952 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:18:00.703959 | orchestrator | 2026-03-07 01:18:00.703966 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-07 01:18:00.703972 | orchestrator | Saturday 07 March 2026 01:12:51 +0000 (0:00:00.450) 0:00:00.826 ******** 2026-03-07 01:18:00.703979 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-07 01:18:00.703987 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-07 01:18:00.703994 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-07 01:18:00.704006 | orchestrator | 2026-03-07 01:18:00.704013 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-07 01:18:00.704020 | orchestrator | 2026-03-07 01:18:00.704027 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-07 01:18:00.704034 | orchestrator | Saturday 07 March 2026 01:12:51 +0000 (0:00:00.575) 0:00:01.402 ******** 2026-03-07 01:18:00.704040 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:18:00.704048 | orchestrator | 2026-03-07 01:18:00.704055 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-03-07 01:18:00.704062 | orchestrator | Saturday 07 March 2026 01:12:52 +0000 (0:00:00.617) 0:00:02.019 ******** 2026-03-07 01:18:00.704070 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-07 01:18:00.704076 | orchestrator | 2026-03-07 01:18:00.704082 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-07 
01:18:00.704087 | orchestrator | Saturday 07 March 2026 01:12:56 +0000 (0:00:04.371) 0:00:06.391 ******** 2026-03-07 01:18:00.704093 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-07 01:18:00.704099 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-07 01:18:00.704105 | orchestrator | 2026-03-07 01:18:00.704111 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-07 01:18:00.704117 | orchestrator | Saturday 07 March 2026 01:13:04 +0000 (0:00:07.421) 0:00:13.813 ******** 2026-03-07 01:18:00.704124 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-07 01:18:00.704130 | orchestrator | 2026-03-07 01:18:00.704137 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-07 01:18:00.704143 | orchestrator | Saturday 07 March 2026 01:13:07 +0000 (0:00:03.683) 0:00:17.496 ******** 2026-03-07 01:18:00.704197 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-07 01:18:00.704203 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-07 01:18:00.704210 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-07 01:18:00.704240 | orchestrator | 2026-03-07 01:18:00.704315 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-07 01:18:00.704324 | orchestrator | Saturday 07 March 2026 01:13:16 +0000 (0:00:09.039) 0:00:26.536 ******** 2026-03-07 01:18:00.704333 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-07 01:18:00.704339 | orchestrator | 2026-03-07 01:18:00.704358 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-07 01:18:00.704365 | orchestrator | Saturday 07 March 2026 01:13:20 +0000 (0:00:03.806) 0:00:30.343 ******** 
2026-03-07 01:18:00.704371 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-07 01:18:00.704378 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-07 01:18:00.704385 | orchestrator | 2026-03-07 01:18:00.704391 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-07 01:18:00.704397 | orchestrator | Saturday 07 March 2026 01:13:28 +0000 (0:00:07.868) 0:00:38.212 ******** 2026-03-07 01:18:00.704404 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-07 01:18:00.704410 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-07 01:18:00.704416 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-07 01:18:00.704422 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-07 01:18:00.704428 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-07 01:18:00.704435 | orchestrator | 2026-03-07 01:18:00.704441 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-07 01:18:00.704447 | orchestrator | Saturday 07 March 2026 01:13:46 +0000 (0:00:18.031) 0:00:56.243 ******** 2026-03-07 01:18:00.704453 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:18:00.704460 | orchestrator | 2026-03-07 01:18:00.704488 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-07 01:18:00.704495 | orchestrator | Saturday 07 March 2026 01:13:47 +0000 (0:00:00.583) 0:00:56.826 ******** 2026-03-07 01:18:00.704501 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.704509 | orchestrator | 2026-03-07 01:18:00.704516 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-07 
01:18:00.704523 | orchestrator | Saturday 07 March 2026 01:13:52 +0000 (0:00:05.715) 0:01:02.542 ******** 2026-03-07 01:18:00.704530 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.704536 | orchestrator | 2026-03-07 01:18:00.704542 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-07 01:18:00.704562 | orchestrator | Saturday 07 March 2026 01:13:57 +0000 (0:00:04.672) 0:01:07.215 ******** 2026-03-07 01:18:00.704569 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:18:00.704609 | orchestrator | 2026-03-07 01:18:00.704615 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-07 01:18:00.704621 | orchestrator | Saturday 07 March 2026 01:14:01 +0000 (0:00:03.609) 0:01:10.824 ******** 2026-03-07 01:18:00.704627 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-07 01:18:00.704634 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-07 01:18:00.704641 | orchestrator | 2026-03-07 01:18:00.704647 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-07 01:18:00.704653 | orchestrator | Saturday 07 March 2026 01:14:12 +0000 (0:00:10.967) 0:01:21.791 ******** 2026-03-07 01:18:00.704659 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-07 01:18:00.704666 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-07 01:18:00.704674 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-07 01:18:00.704693 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': 
'5555'}]) 2026-03-07 01:18:00.704700 | orchestrator | 2026-03-07 01:18:00.704708 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-07 01:18:00.704715 | orchestrator | Saturday 07 March 2026 01:14:30 +0000 (0:00:18.184) 0:01:39.976 ******** 2026-03-07 01:18:00.704723 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.704731 | orchestrator | 2026-03-07 01:18:00.704736 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-07 01:18:00.704743 | orchestrator | Saturday 07 March 2026 01:14:35 +0000 (0:00:04.933) 0:01:44.909 ******** 2026-03-07 01:18:00.704748 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.704754 | orchestrator | 2026-03-07 01:18:00.704760 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-07 01:18:00.704766 | orchestrator | Saturday 07 March 2026 01:14:41 +0000 (0:00:05.890) 0:01:50.799 ******** 2026-03-07 01:18:00.704773 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:18:00.704779 | orchestrator | 2026-03-07 01:18:00.704785 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-07 01:18:00.704802 | orchestrator | Saturday 07 March 2026 01:14:41 +0000 (0:00:00.235) 0:01:51.035 ******** 2026-03-07 01:18:00.704810 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:18:00.704816 | orchestrator | 2026-03-07 01:18:00.704822 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-07 01:18:00.704829 | orchestrator | Saturday 07 March 2026 01:14:46 +0000 (0:00:05.076) 0:01:56.112 ******** 2026-03-07 01:18:00.704836 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:18:00.704844 | orchestrator | 2026-03-07 01:18:00.704851 | orchestrator | TASK [octavia : Create ports for Octavia 
health-manager nodes] ***************** 2026-03-07 01:18:00.704857 | orchestrator | Saturday 07 March 2026 01:14:47 +0000 (0:00:01.231) 0:01:57.343 ******** 2026-03-07 01:18:00.704863 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:18:00.704869 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:18:00.704876 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.704882 | orchestrator | 2026-03-07 01:18:00.704888 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-07 01:18:00.704901 | orchestrator | Saturday 07 March 2026 01:14:54 +0000 (0:00:06.222) 0:02:03.565 ******** 2026-03-07 01:18:00.704954 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:18:00.704962 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:18:00.704969 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.704976 | orchestrator | 2026-03-07 01:18:00.704982 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-07 01:18:00.704988 | orchestrator | Saturday 07 March 2026 01:14:59 +0000 (0:00:05.532) 0:02:09.098 ******** 2026-03-07 01:18:00.704995 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.705002 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:18:00.705009 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:18:00.705016 | orchestrator | 2026-03-07 01:18:00.705023 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-07 01:18:00.705030 | orchestrator | Saturday 07 March 2026 01:15:00 +0000 (0:00:00.881) 0:02:09.980 ******** 2026-03-07 01:18:00.705036 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:18:00.705043 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:18:00.705049 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:18:00.705057 | orchestrator | 2026-03-07 01:18:00.705064 | orchestrator | TASK [octavia : Create octavia dhclient conf] 
********************************** 2026-03-07 01:18:00.705071 | orchestrator | Saturday 07 March 2026 01:15:02 +0000 (0:00:02.404) 0:02:12.384 ******** 2026-03-07 01:18:00.705078 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.705085 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:18:00.705091 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:18:00.705106 | orchestrator | 2026-03-07 01:18:00.705114 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-07 01:18:00.705121 | orchestrator | Saturday 07 March 2026 01:15:04 +0000 (0:00:01.638) 0:02:14.022 ******** 2026-03-07 01:18:00.705127 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.705134 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:18:00.705142 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:18:00.705149 | orchestrator | 2026-03-07 01:18:00.705156 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-07 01:18:00.705163 | orchestrator | Saturday 07 March 2026 01:15:05 +0000 (0:00:01.366) 0:02:15.389 ******** 2026-03-07 01:18:00.705169 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:18:00.705176 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:18:00.705183 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.705190 | orchestrator | 2026-03-07 01:18:00.705206 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-07 01:18:00.705214 | orchestrator | Saturday 07 March 2026 01:15:07 +0000 (0:00:02.105) 0:02:17.494 ******** 2026-03-07 01:18:00.705220 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.705227 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:18:00.705233 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:18:00.705240 | orchestrator | 2026-03-07 01:18:00.705247 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] 
***************************** 2026-03-07 01:18:00.705254 | orchestrator | Saturday 07 March 2026 01:15:09 +0000 (0:00:01.798) 0:02:19.293 ******** 2026-03-07 01:18:00.705261 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:18:00.705268 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:18:00.705275 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:18:00.705282 | orchestrator | 2026-03-07 01:18:00.705289 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-07 01:18:00.705296 | orchestrator | Saturday 07 March 2026 01:15:10 +0000 (0:00:00.641) 0:02:19.934 ******** 2026-03-07 01:18:00.705303 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:18:00.705309 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:18:00.705316 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:18:00.705323 | orchestrator | 2026-03-07 01:18:00.705330 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-07 01:18:00.705337 | orchestrator | Saturday 07 March 2026 01:15:13 +0000 (0:00:02.668) 0:02:22.603 ******** 2026-03-07 01:18:00.705344 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:18:00.705351 | orchestrator | 2026-03-07 01:18:00.705357 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-07 01:18:00.705364 | orchestrator | Saturday 07 March 2026 01:15:13 +0000 (0:00:00.812) 0:02:23.416 ******** 2026-03-07 01:18:00.705371 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:18:00.705378 | orchestrator | 2026-03-07 01:18:00.705385 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-07 01:18:00.705392 | orchestrator | Saturday 07 March 2026 01:15:17 +0000 (0:00:04.048) 0:02:27.464 ******** 2026-03-07 01:18:00.705400 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:18:00.705406 | 
orchestrator | 2026-03-07 01:18:00.705412 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-07 01:18:00.705419 | orchestrator | Saturday 07 March 2026 01:15:21 +0000 (0:00:03.848) 0:02:31.313 ******** 2026-03-07 01:18:00.705426 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-07 01:18:00.705433 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-07 01:18:00.705440 | orchestrator | 2026-03-07 01:18:00.705447 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-07 01:18:00.705455 | orchestrator | Saturday 07 March 2026 01:15:30 +0000 (0:00:08.247) 0:02:39.560 ******** 2026-03-07 01:18:00.705461 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:18:00.705532 | orchestrator | 2026-03-07 01:18:00.705540 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-07 01:18:00.705554 | orchestrator | Saturday 07 March 2026 01:15:34 +0000 (0:00:04.473) 0:02:44.034 ******** 2026-03-07 01:18:00.705562 | orchestrator | ok: [testbed-node-0] 2026-03-07 01:18:00.705569 | orchestrator | ok: [testbed-node-1] 2026-03-07 01:18:00.705576 | orchestrator | ok: [testbed-node-2] 2026-03-07 01:18:00.705583 | orchestrator | 2026-03-07 01:18:00.705590 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-07 01:18:00.705597 | orchestrator | Saturday 07 March 2026 01:15:34 +0000 (0:00:00.362) 0:02:44.396 ******** 2026-03-07 01:18:00.705612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:18:00.705631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:18:00.705640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:18:00.705648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:18:00.705657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:18:00.705712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:18:00.705722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.705730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.705743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.705750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.705757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.705769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.705781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:18:00.705790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:18:00.705801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:18:00.705809 | orchestrator | 2026-03-07 01:18:00.705816 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-07 01:18:00.705823 | orchestrator | Saturday 07 March 2026 01:15:37 +0000 (0:00:02.724) 0:02:47.121 ******** 2026-03-07 01:18:00.705830 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:18:00.705837 | orchestrator | 2026-03-07 01:18:00.705844 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-07 01:18:00.705852 | orchestrator | Saturday 07 March 2026 01:15:37 +0000 (0:00:00.136) 0:02:47.258 ******** 2026-03-07 01:18:00.705858 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:18:00.705865 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:18:00.705872 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:18:00.705879 | orchestrator | 2026-03-07 01:18:00.705886 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-07 01:18:00.705893 | orchestrator | Saturday 07 March 2026 01:15:38 +0000 (0:00:00.604) 0:02:47.862 ******** 2026-03-07 01:18:00.705901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 01:18:00.705914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 01:18:00.705925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 01:18:00.705932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 01:18:00.705940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:18:00.705947 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:18:00.705960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 01:18:00.705974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 01:18:00.705982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 01:18:00.705996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 01:18:00.706004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:18:00.706012 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:18:00.706353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 01:18:00.706365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 01:18:00.706380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 01:18:00.706388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 01:18:00.706400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:18:00.706407 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:18:00.706415 | orchestrator | 2026-03-07 01:18:00.706422 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-07 01:18:00.706430 | orchestrator | Saturday 07 March 2026 01:15:39 +0000 (0:00:00.797) 0:02:48.659 ******** 2026-03-07 01:18:00.706437 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-07 01:18:00.706445 | orchestrator | 2026-03-07 01:18:00.706451 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-07 01:18:00.706458 | orchestrator | Saturday 07 March 2026 01:15:39 +0000 (0:00:00.639) 0:02:49.299 ******** 2026-03-07 01:18:00.706483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:18:00.706497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:18:00.706514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:18:00.706522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:18:00.706533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:18:00.706541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:18:00.706547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.706558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.706571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.706578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.706586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.706597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.706605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:18:00.706618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:18:00.706630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:18:00.706638 | orchestrator | 2026-03-07 01:18:00.706646 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-07 01:18:00.706653 | orchestrator | Saturday 07 March 2026 01:15:45 +0000 (0:00:05.851) 0:02:55.151 ******** 2026-03-07 01:18:00.706660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 01:18:00.706668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 01:18:00.706679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 01:18:00.706687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 01:18:00.706702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:18:00.706710 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:18:00.706718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 01:18:00.706725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 01:18:00.706732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 01:18:00.706742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 01:18:00.706747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:18:00.706757 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:18:00.706767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 01:18:00.706773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 01:18:00.706779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 01:18:00.706785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 01:18:00.706795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:18:00.706802 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:18:00.706809 | orchestrator | 2026-03-07 01:18:00.706815 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-07 01:18:00.706821 | orchestrator | Saturday 07 March 2026 01:15:46 +0000 (0:00:00.760) 0:02:55.912 ******** 2026-03-07 01:18:00.706827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 01:18:00.706843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 01:18:00.707158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 01:18:00.707170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 01:18:00.707179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:18:00.707187 | orchestrator | skipping: [testbed-node-0] 2026-03-07 01:18:00.707201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 01:18:00.707220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 01:18:00.707235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 01:18:00.707243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 01:18:00.707251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:18:00.707258 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:18:00.707270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-07 01:18:00.707279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-07 01:18:00.707291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-07 01:18:00.707305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-07 01:18:00.707312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-07 01:18:00.707319 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:18:00.707326 | orchestrator | 2026-03-07 01:18:00.707334 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-07 01:18:00.707341 | orchestrator | Saturday 07 March 2026 01:15:47 +0000 (0:00:00.951) 0:02:56.863 ******** 2026-03-07 01:18:00.707349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:18:00.707360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:18:00.707373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:18:00.707384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:18:00.707391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:18:00.707399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:18:00.707406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.707417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.707430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.707438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.707449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.707457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.707486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:18:00.707493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:18:00.707508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:18:00.707514 | orchestrator | 2026-03-07 01:18:00.707521 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-07 01:18:00.707528 | orchestrator | Saturday 07 March 2026 01:15:52 +0000 (0:00:05.288) 0:03:02.151 ******** 2026-03-07 01:18:00.707535 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-07 01:18:00.707543 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-07 01:18:00.707550 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-07 01:18:00.707557 | orchestrator | 2026-03-07 01:18:00.707565 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-07 01:18:00.707572 | orchestrator | Saturday 07 March 2026 01:15:54 +0000 (0:00:02.284) 0:03:04.435 ******** 2026-03-07 01:18:00.707585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:18:00.707592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:18:00.707599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:18:00.707614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:18:00.707622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:18:00.707630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:18:00.707641 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.707648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.707656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.707667 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.707677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.707685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.707693 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:18:00.707707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:18:00.707715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:18:00.707722 | orchestrator | 2026-03-07 01:18:00.707729 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 
2026-03-07 01:18:00.707735 | orchestrator | Saturday 07 March 2026 01:16:13 +0000 (0:00:18.824) 0:03:23.260 ********
2026-03-07 01:18:00.707748 | orchestrator | changed: [testbed-node-0]
2026-03-07 01:18:00.707754 | orchestrator | changed: [testbed-node-1]
2026-03-07 01:18:00.707760 | orchestrator | changed: [testbed-node-2]
2026-03-07 01:18:00.707767 | orchestrator |
2026-03-07 01:18:00.707774 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-03-07 01:18:00.707781 | orchestrator | Saturday 07 March 2026 01:16:15 +0000 (0:00:01.631) 0:03:24.891 ********
2026-03-07 01:18:00.707788 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-07 01:18:00.707795 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-07 01:18:00.707802 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-07 01:18:00.707809 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-07 01:18:00.707817 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-07 01:18:00.707823 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-07 01:18:00.707831 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-07 01:18:00.707837 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-07 01:18:00.707844 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-07 01:18:00.707851 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-07 01:18:00.707858 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-07 01:18:00.707865 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-07 01:18:00.707872 | orchestrator |
2026-03-07 01:18:00.707879 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-03-07 01:18:00.707890 | orchestrator | Saturday 07 March 2026 01:16:20 +0000 (0:00:05.595) 0:03:30.487 ********
2026-03-07 01:18:00.707897 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-07 01:18:00.707904 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-07 01:18:00.707911 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-07 01:18:00.707918 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-07 01:18:00.707925 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-07 01:18:00.707932 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-07 01:18:00.707939 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-07 01:18:00.707946 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-07 01:18:00.707953 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-07 01:18:00.707960 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-07 01:18:00.707967 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-07 01:18:00.707974 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-07 01:18:00.707981 | orchestrator |
2026-03-07 01:18:00.707989 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-03-07 01:18:00.707995 | orchestrator | Saturday 07 March 2026 01:16:27 +0000 (0:00:06.160) 0:03:36.647 ********
2026-03-07 01:18:00.708002 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-03-07 01:18:00.708009 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-03-07 01:18:00.708016 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-03-07 01:18:00.708023 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-03-07 01:18:00.708030 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-03-07 01:18:00.708037 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-03-07 01:18:00.708044 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-03-07 01:18:00.708052 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-03-07 01:18:00.708063 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-03-07 01:18:00.708078 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-03-07 01:18:00.708085 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-03-07 01:18:00.708092 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-03-07 01:18:00.708099 | orchestrator |
2026-03-07 01:18:00.708106 | orchestrator | TASK [octavia : Check octavia containers] **************************************
2026-03-07 01:18:00.708113 | orchestrator | Saturday 07 March 2026 01:16:32 +0000 (0:00:05.245) 0:03:41.892 ********
2026-03-07 01:18:00.708121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:18:00.708129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:18:00.708140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-07 01:18:00.708147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:18:00.708158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:18:00.708170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-07 01:18:00.708178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.708185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.708196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.708204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.708212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.708227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-07 01:18:00.708235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:18:00.708242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:18:00.708248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-07 01:18:00.708255 | orchestrator | 2026-03-07 01:18:00.708262 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-07 01:18:00.708269 | orchestrator | Saturday 07 March 2026 01:16:36 +0000 (0:00:03.991) 0:03:45.884 ******** 2026-03-07 01:18:00.708277 | 
orchestrator | skipping: [testbed-node-0] 2026-03-07 01:18:00.708284 | orchestrator | skipping: [testbed-node-1] 2026-03-07 01:18:00.708291 | orchestrator | skipping: [testbed-node-2] 2026-03-07 01:18:00.708298 | orchestrator | 2026-03-07 01:18:00.708308 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-07 01:18:00.708315 | orchestrator | Saturday 07 March 2026 01:16:36 +0000 (0:00:00.352) 0:03:46.237 ******** 2026-03-07 01:18:00.708323 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.708330 | orchestrator | 2026-03-07 01:18:00.708337 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-07 01:18:00.708344 | orchestrator | Saturday 07 March 2026 01:16:38 +0000 (0:00:02.264) 0:03:48.502 ******** 2026-03-07 01:18:00.708350 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.708357 | orchestrator | 2026-03-07 01:18:00.708364 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-07 01:18:00.708371 | orchestrator | Saturday 07 March 2026 01:16:41 +0000 (0:00:02.252) 0:03:50.754 ******** 2026-03-07 01:18:00.708382 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.708389 | orchestrator | 2026-03-07 01:18:00.708396 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-07 01:18:00.708404 | orchestrator | Saturday 07 March 2026 01:16:43 +0000 (0:00:02.528) 0:03:53.283 ******** 2026-03-07 01:18:00.708411 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.708418 | orchestrator | 2026-03-07 01:18:00.708424 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-03-07 01:18:00.708431 | orchestrator | Saturday 07 March 2026 01:16:46 +0000 (0:00:03.150) 0:03:56.434 ******** 2026-03-07 01:18:00.708437 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.708444 | 
orchestrator | 2026-03-07 01:18:00.708451 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-07 01:18:00.708458 | orchestrator | Saturday 07 March 2026 01:17:11 +0000 (0:00:24.477) 0:04:20.911 ******** 2026-03-07 01:18:00.708487 | orchestrator | 2026-03-07 01:18:00.708495 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-07 01:18:00.708502 | orchestrator | Saturday 07 March 2026 01:17:11 +0000 (0:00:00.079) 0:04:20.991 ******** 2026-03-07 01:18:00.708509 | orchestrator | 2026-03-07 01:18:00.708516 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-07 01:18:00.708522 | orchestrator | Saturday 07 March 2026 01:17:11 +0000 (0:00:00.066) 0:04:21.058 ******** 2026-03-07 01:18:00.708529 | orchestrator | 2026-03-07 01:18:00.708536 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-07 01:18:00.708548 | orchestrator | Saturday 07 March 2026 01:17:11 +0000 (0:00:00.074) 0:04:21.132 ******** 2026-03-07 01:18:00.708555 | orchestrator | 2026-03-07 01:18:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:18:00.708563 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.708570 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:18:00.708577 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:18:00.708584 | orchestrator | 2026-03-07 01:18:00.708591 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-07 01:18:00.708599 | orchestrator | Saturday 07 March 2026 01:17:27 +0000 (0:00:15.957) 0:04:37.089 ******** 2026-03-07 01:18:00.708606 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.708613 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:18:00.708620 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:18:00.708626 | orchestrator | 2026-03-07
01:18:00.708633 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-07 01:18:00.708639 | orchestrator | Saturday 07 March 2026 01:17:34 +0000 (0:00:06.996) 0:04:44.086 ******** 2026-03-07 01:18:00.708646 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.708653 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:18:00.708661 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:18:00.708668 | orchestrator | 2026-03-07 01:18:00.708675 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-07 01:18:00.708682 | orchestrator | Saturday 07 March 2026 01:17:41 +0000 (0:00:06.733) 0:04:50.819 ******** 2026-03-07 01:18:00.708690 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.708697 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:18:00.708704 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:18:00.708711 | orchestrator | 2026-03-07 01:18:00.708718 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-07 01:18:00.708725 | orchestrator | Saturday 07 March 2026 01:17:47 +0000 (0:00:05.932) 0:04:56.751 ******** 2026-03-07 01:18:00.708732 | orchestrator | changed: [testbed-node-0] 2026-03-07 01:18:00.708738 | orchestrator | changed: [testbed-node-2] 2026-03-07 01:18:00.708743 | orchestrator | changed: [testbed-node-1] 2026-03-07 01:18:00.708749 | orchestrator | 2026-03-07 01:18:00.708755 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:18:00.708762 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-07 01:18:00.708774 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-07 01:18:00.708781 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 
ignored=0 2026-03-07 01:18:00.708788 | orchestrator | 2026-03-07 01:18:00.708795 | orchestrator | 2026-03-07 01:18:00.708802 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:18:00.708809 | orchestrator | Saturday 07 March 2026 01:17:58 +0000 (0:00:11.007) 0:05:07.759 ******** 2026-03-07 01:18:00.708817 | orchestrator | =============================================================================== 2026-03-07 01:18:00.708824 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 24.48s 2026-03-07 01:18:00.708831 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.82s 2026-03-07 01:18:00.708838 | orchestrator | octavia : Add rules for security groups -------------------------------- 18.18s 2026-03-07 01:18:00.708845 | orchestrator | octavia : Adding octavia related roles --------------------------------- 18.03s 2026-03-07 01:18:00.708856 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.96s 2026-03-07 01:18:00.708863 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 11.01s 2026-03-07 01:18:00.708870 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.97s 2026-03-07 01:18:00.708877 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.04s 2026-03-07 01:18:00.708883 | orchestrator | octavia : Get security groups for octavia ------------------------------- 8.25s 2026-03-07 01:18:00.708890 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.87s 2026-03-07 01:18:00.708897 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.42s 2026-03-07 01:18:00.708904 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 7.00s 2026-03-07 01:18:00.708911 | orchestrator | 
octavia : Restart octavia-health-manager container ---------------------- 6.73s 2026-03-07 01:18:00.708918 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.22s 2026-03-07 01:18:00.708925 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 6.16s 2026-03-07 01:18:00.708933 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.93s 2026-03-07 01:18:00.708940 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.89s 2026-03-07 01:18:00.708947 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.85s 2026-03-07 01:18:00.708954 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.72s 2026-03-07 01:18:00.708961 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.60s 2026-03-07 01:18:03.737149 | orchestrator | 2026-03-07 01:18:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-07 01:19:01.531750 | orchestrator | 2026-03-07 01:19:01.911881 | orchestrator | 2026-03-07 01:19:01.922319 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Mar 7 01:19:01 UTC 2026 2026-03-07 01:19:01.922486 | orchestrator | 2026-03-07 01:19:02.283590 | orchestrator | ok: Runtime: 0:37:43.172188 2026-03-07 01:19:02.546557 | 2026-03-07 01:19:02.546697 | TASK [Bootstrap services] 2026-03-07 01:19:03.339629 | orchestrator | 2026-03-07 01:19:03.339784 | orchestrator | # BOOTSTRAP 2026-03-07 01:19:03.339800 | orchestrator | 2026-03-07 01:19:03.339810 | orchestrator | + set -e 2026-03-07 01:19:03.339817 | orchestrator | + echo 2026-03-07 01:19:03.339825 | orchestrator | + echo '# BOOTSTRAP' 2026-03-07
01:19:03.339837 | orchestrator | + echo 2026-03-07 01:19:03.339865 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-07 01:19:03.349589 | orchestrator | + set -e 2026-03-07 01:19:03.349692 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-07 01:19:08.892649 | orchestrator | 2026-03-07 01:19:08 | INFO  | It takes a moment until task 28fd92f0-8433-4b79-bf19-128b5185c176 (flavor-manager) has been started and output is visible here. 2026-03-07 01:19:17.333935 | orchestrator | 2026-03-07 01:19:12 | INFO  | Flavor SCS-1L-1 created 2026-03-07 01:19:17.334076 | orchestrator | 2026-03-07 01:19:12 | INFO  | Flavor SCS-1L-1-5 created 2026-03-07 01:19:17.334092 | orchestrator | 2026-03-07 01:19:12 | INFO  | Flavor SCS-1V-2 created 2026-03-07 01:19:17.334104 | orchestrator | 2026-03-07 01:19:13 | INFO  | Flavor SCS-1V-2-5 created 2026-03-07 01:19:17.334108 | orchestrator | 2026-03-07 01:19:13 | INFO  | Flavor SCS-1V-4 created 2026-03-07 01:19:17.334112 | orchestrator | 2026-03-07 01:19:13 | INFO  | Flavor SCS-1V-4-10 created 2026-03-07 01:19:17.334116 | orchestrator | 2026-03-07 01:19:13 | INFO  | Flavor SCS-1V-8 created 2026-03-07 01:19:17.334121 | orchestrator | 2026-03-07 01:19:13 | INFO  | Flavor SCS-1V-8-20 created 2026-03-07 01:19:17.334134 | orchestrator | 2026-03-07 01:19:13 | INFO  | Flavor SCS-2V-4 created 2026-03-07 01:19:17.334138 | orchestrator | 2026-03-07 01:19:13 | INFO  | Flavor SCS-2V-4-10 created 2026-03-07 01:19:17.334142 | orchestrator | 2026-03-07 01:19:14 | INFO  | Flavor SCS-2V-8 created 2026-03-07 01:19:17.334146 | orchestrator | 2026-03-07 01:19:14 | INFO  | Flavor SCS-2V-8-20 created 2026-03-07 01:19:17.334150 | orchestrator | 2026-03-07 01:19:14 | INFO  | Flavor SCS-2V-16 created 2026-03-07 01:19:17.334154 | orchestrator | 2026-03-07 01:19:14 | INFO  | Flavor SCS-2V-16-50 created 2026-03-07 01:19:17.334158 | orchestrator | 2026-03-07 01:19:14 | INFO  | Flavor SCS-4V-8 created 
2026-03-07 01:19:17.334161 | orchestrator | 2026-03-07 01:19:15 | INFO  | Flavor SCS-4V-8-20 created 2026-03-07 01:19:17.334165 | orchestrator | 2026-03-07 01:19:15 | INFO  | Flavor SCS-4V-16 created 2026-03-07 01:19:17.334169 | orchestrator | 2026-03-07 01:19:15 | INFO  | Flavor SCS-4V-16-50 created 2026-03-07 01:19:17.334173 | orchestrator | 2026-03-07 01:19:15 | INFO  | Flavor SCS-4V-32 created 2026-03-07 01:19:17.334177 | orchestrator | 2026-03-07 01:19:15 | INFO  | Flavor SCS-4V-32-100 created 2026-03-07 01:19:17.334181 | orchestrator | 2026-03-07 01:19:15 | INFO  | Flavor SCS-8V-16 created 2026-03-07 01:19:17.334185 | orchestrator | 2026-03-07 01:19:15 | INFO  | Flavor SCS-8V-16-50 created 2026-03-07 01:19:17.334189 | orchestrator | 2026-03-07 01:19:16 | INFO  | Flavor SCS-8V-32 created 2026-03-07 01:19:17.334192 | orchestrator | 2026-03-07 01:19:16 | INFO  | Flavor SCS-8V-32-100 created 2026-03-07 01:19:17.334196 | orchestrator | 2026-03-07 01:19:16 | INFO  | Flavor SCS-16V-32 created 2026-03-07 01:19:17.334200 | orchestrator | 2026-03-07 01:19:16 | INFO  | Flavor SCS-16V-32-100 created 2026-03-07 01:19:17.334204 | orchestrator | 2026-03-07 01:19:16 | INFO  | Flavor SCS-2V-4-20s created 2026-03-07 01:19:17.334208 | orchestrator | 2026-03-07 01:19:16 | INFO  | Flavor SCS-4V-8-50s created 2026-03-07 01:19:17.334211 | orchestrator | 2026-03-07 01:19:17 | INFO  | Flavor SCS-8V-32-100s created 2026-03-07 01:19:20.123058 | orchestrator | 2026-03-07 01:19:20 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-07 01:19:30.255621 | orchestrator | 2026-03-07 01:19:30 | INFO  | Task 7de0472d-d62e-4202-997e-a32d33b7c21f (bootstrap-basic) was prepared for execution. 2026-03-07 01:19:30.255773 | orchestrator | 2026-03-07 01:19:30 | INFO  | It takes a moment until task 7de0472d-d62e-4202-997e-a32d33b7c21f (bootstrap-basic) has been started and output is visible here. 
2026-03-07 01:20:23.662098 | orchestrator | 2026-03-07 01:20:23.662200 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-07 01:20:23.662213 | orchestrator | 2026-03-07 01:20:23.662260 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-07 01:20:23.662269 | orchestrator | Saturday 07 March 2026 01:19:35 +0000 (0:00:00.086) 0:00:00.086 ******** 2026-03-07 01:20:23.662277 | orchestrator | ok: [localhost] 2026-03-07 01:20:23.662286 | orchestrator | 2026-03-07 01:20:23.662293 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-07 01:20:23.662300 | orchestrator | Saturday 07 March 2026 01:19:37 +0000 (0:00:02.088) 0:00:02.175 ******** 2026-03-07 01:20:23.662307 | orchestrator | ok: [localhost] 2026-03-07 01:20:23.662314 | orchestrator | 2026-03-07 01:20:23.662332 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-07 01:20:23.662347 | orchestrator | Saturday 07 March 2026 01:19:48 +0000 (0:00:11.564) 0:00:13.739 ******** 2026-03-07 01:20:23.662354 | orchestrator | changed: [localhost] 2026-03-07 01:20:23.662361 | orchestrator | 2026-03-07 01:20:23.662368 | orchestrator | TASK [Create public network] *************************************************** 2026-03-07 01:20:23.662376 | orchestrator | Saturday 07 March 2026 01:19:57 +0000 (0:00:08.110) 0:00:21.850 ******** 2026-03-07 01:20:23.662383 | orchestrator | changed: [localhost] 2026-03-07 01:20:23.662391 | orchestrator | 2026-03-07 01:20:23.662398 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-07 01:20:23.662404 | orchestrator | Saturday 07 March 2026 01:20:02 +0000 (0:00:05.772) 0:00:27.623 ******** 2026-03-07 01:20:23.662415 | orchestrator | changed: [localhost] 2026-03-07 01:20:23.662422 | orchestrator | 2026-03-07 01:20:23.662429 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-03-07 01:20:23.662436 | orchestrator | Saturday 07 March 2026 01:20:10 +0000 (0:00:07.338) 0:00:34.962 ******** 2026-03-07 01:20:23.662442 | orchestrator | changed: [localhost] 2026-03-07 01:20:23.662449 | orchestrator | 2026-03-07 01:20:23.662456 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-07 01:20:23.662463 | orchestrator | Saturday 07 March 2026 01:20:15 +0000 (0:00:04.913) 0:00:39.875 ******** 2026-03-07 01:20:23.662470 | orchestrator | changed: [localhost] 2026-03-07 01:20:23.662476 | orchestrator | 2026-03-07 01:20:23.662483 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-07 01:20:23.662499 | orchestrator | Saturday 07 March 2026 01:20:19 +0000 (0:00:04.178) 0:00:44.054 ******** 2026-03-07 01:20:23.662506 | orchestrator | ok: [localhost] 2026-03-07 01:20:23.662513 | orchestrator | 2026-03-07 01:20:23.662519 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-07 01:20:23.662526 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-07 01:20:23.662535 | orchestrator | 2026-03-07 01:20:23.662541 | orchestrator | 2026-03-07 01:20:23.662548 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-07 01:20:23.662555 | orchestrator | Saturday 07 March 2026 01:20:23 +0000 (0:00:04.047) 0:00:48.101 ******** 2026-03-07 01:20:23.662562 | orchestrator | =============================================================================== 2026-03-07 01:20:23.662568 | orchestrator | Get volume type LUKS --------------------------------------------------- 11.56s 2026-03-07 01:20:23.662576 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.11s 2026-03-07 01:20:23.662582 | 
orchestrator | Set public network to default ------------------------------------------- 7.34s 2026-03-07 01:20:23.662589 | orchestrator | Create public network --------------------------------------------------- 5.77s 2026-03-07 01:20:23.662616 | orchestrator | Create public subnet ---------------------------------------------------- 4.91s 2026-03-07 01:20:23.662623 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.18s 2026-03-07 01:20:23.662630 | orchestrator | Create manager role ----------------------------------------------------- 4.05s 2026-03-07 01:20:23.662637 | orchestrator | Gathering Facts --------------------------------------------------------- 2.09s 2026-03-07 01:20:26.464286 | orchestrator | 2026-03-07 01:20:26 | INFO  | It takes a moment until task e6301fac-a4a7-4355-b888-7afab16cca6b (image-manager) has been started and output is visible here. 2026-03-07 01:21:10.583849 | orchestrator | 2026-03-07 01:20:29 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-07 01:21:10.583961 | orchestrator | 2026-03-07 01:20:29 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-07 01:21:10.583974 | orchestrator | 2026-03-07 01:20:29 | INFO  | Importing image Cirros 0.6.2 2026-03-07 01:21:10.583981 | orchestrator | 2026-03-07 01:20:29 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-07 01:21:10.583989 | orchestrator | 2026-03-07 01:20:31 | INFO  | Waiting for image to leave queued state... 2026-03-07 01:21:10.583997 | orchestrator | 2026-03-07 01:20:33 | INFO  | Waiting for import to complete... 
2026-03-07 01:21:10.584005 | orchestrator | 2026-03-07 01:20:44 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-07 01:21:10.584013 | orchestrator | 2026-03-07 01:20:44 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-07 01:21:10.584020 | orchestrator | 2026-03-07 01:20:44 | INFO  | Setting internal_version = 0.6.2 2026-03-07 01:21:10.584027 | orchestrator | 2026-03-07 01:20:44 | INFO  | Setting image_original_user = cirros 2026-03-07 01:21:10.584034 | orchestrator | 2026-03-07 01:20:44 | INFO  | Adding tag os:cirros 2026-03-07 01:21:10.584041 | orchestrator | 2026-03-07 01:20:44 | INFO  | Setting property architecture: x86_64 2026-03-07 01:21:10.584048 | orchestrator | 2026-03-07 01:20:45 | INFO  | Setting property hw_disk_bus: scsi 2026-03-07 01:21:10.584054 | orchestrator | 2026-03-07 01:20:45 | INFO  | Setting property hw_rng_model: virtio 2026-03-07 01:21:10.584062 | orchestrator | 2026-03-07 01:20:45 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-07 01:21:10.584068 | orchestrator | 2026-03-07 01:20:45 | INFO  | Setting property hw_watchdog_action: reset 2026-03-07 01:21:10.584076 | orchestrator | 2026-03-07 01:20:46 | INFO  | Setting property hypervisor_type: qemu 2026-03-07 01:21:10.584083 | orchestrator | 2026-03-07 01:20:46 | INFO  | Setting property os_distro: cirros 2026-03-07 01:21:10.584090 | orchestrator | 2026-03-07 01:20:46 | INFO  | Setting property os_purpose: minimal 2026-03-07 01:21:10.584097 | orchestrator | 2026-03-07 01:20:46 | INFO  | Setting property replace_frequency: never 2026-03-07 01:21:10.584104 | orchestrator | 2026-03-07 01:20:47 | INFO  | Setting property uuid_validity: none 2026-03-07 01:21:10.584110 | orchestrator | 2026-03-07 01:20:47 | INFO  | Setting property provided_until: none 2026-03-07 01:21:10.584117 | orchestrator | 2026-03-07 01:20:47 | INFO  | Setting property image_description: Cirros 2026-03-07 01:21:10.584123 | orchestrator | 2026-03-07 01:20:47 | INFO  | 
Setting property image_name: Cirros 2026-03-07 01:21:10.584129 | orchestrator | 2026-03-07 01:20:48 | INFO  | Setting property internal_version: 0.6.2 2026-03-07 01:21:10.584135 | orchestrator | 2026-03-07 01:20:48 | INFO  | Setting property image_original_user: cirros 2026-03-07 01:21:10.584258 | orchestrator | 2026-03-07 01:20:48 | INFO  | Setting property os_version: 0.6.2 2026-03-07 01:21:10.584278 | orchestrator | 2026-03-07 01:20:49 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-07 01:21:10.584287 | orchestrator | 2026-03-07 01:20:49 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-07 01:21:10.584292 | orchestrator | 2026-03-07 01:20:49 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-07 01:21:10.584298 | orchestrator | 2026-03-07 01:20:49 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-07 01:21:10.584304 | orchestrator | 2026-03-07 01:20:49 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-07 01:21:10.584309 | orchestrator | 2026-03-07 01:20:50 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-07 01:21:10.584319 | orchestrator | 2026-03-07 01:20:50 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-07 01:21:10.584326 | orchestrator | 2026-03-07 01:20:50 | INFO  | Importing image Cirros 0.6.3 2026-03-07 01:21:10.584332 | orchestrator | 2026-03-07 01:20:50 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-07 01:21:10.584338 | orchestrator | 2026-03-07 01:20:51 | INFO  | Waiting for image to leave queued state... 2026-03-07 01:21:10.584344 | orchestrator | 2026-03-07 01:20:53 | INFO  | Waiting for import to complete... 
2026-03-07 01:21:10.584366 | orchestrator | 2026-03-07 01:21:03 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-03-07 01:21:10.584372 | orchestrator | 2026-03-07 01:21:04 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-03-07 01:21:10.584378 | orchestrator | 2026-03-07 01:21:04 | INFO  | Setting internal_version = 0.6.3 2026-03-07 01:21:10.584384 | orchestrator | 2026-03-07 01:21:04 | INFO  | Setting image_original_user = cirros 2026-03-07 01:21:10.584391 | orchestrator | 2026-03-07 01:21:04 | INFO  | Adding tag os:cirros 2026-03-07 01:21:10.584397 | orchestrator | 2026-03-07 01:21:04 | INFO  | Setting property architecture: x86_64 2026-03-07 01:21:10.584403 | orchestrator | 2026-03-07 01:21:04 | INFO  | Setting property hw_disk_bus: scsi 2026-03-07 01:21:10.584409 | orchestrator | 2026-03-07 01:21:05 | INFO  | Setting property hw_rng_model: virtio 2026-03-07 01:21:10.584415 | orchestrator | 2026-03-07 01:21:05 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-07 01:21:10.584421 | orchestrator | 2026-03-07 01:21:05 | INFO  | Setting property hw_watchdog_action: reset 2026-03-07 01:21:10.584427 | orchestrator | 2026-03-07 01:21:06 | INFO  | Setting property hypervisor_type: qemu 2026-03-07 01:21:10.584434 | orchestrator | 2026-03-07 01:21:06 | INFO  | Setting property os_distro: cirros 2026-03-07 01:21:10.584440 | orchestrator | 2026-03-07 01:21:06 | INFO  | Setting property os_purpose: minimal 2026-03-07 01:21:10.584445 | orchestrator | 2026-03-07 01:21:06 | INFO  | Setting property replace_frequency: never 2026-03-07 01:21:10.584452 | orchestrator | 2026-03-07 01:21:07 | INFO  | Setting property uuid_validity: none 2026-03-07 01:21:10.584458 | orchestrator | 2026-03-07 01:21:07 | INFO  | Setting property provided_until: none 2026-03-07 01:21:10.584464 | orchestrator | 2026-03-07 01:21:07 | INFO  | Setting property image_description: Cirros 2026-03-07 01:21:10.584470 | orchestrator | 2026-03-07 01:21:08 | INFO  | 
Setting property image_name: Cirros 2026-03-07 01:21:10.584477 | orchestrator | 2026-03-07 01:21:08 | INFO  | Setting property internal_version: 0.6.3 2026-03-07 01:21:10.584493 | orchestrator | 2026-03-07 01:21:08 | INFO  | Setting property image_original_user: cirros 2026-03-07 01:21:10.584500 | orchestrator | 2026-03-07 01:21:08 | INFO  | Setting property os_version: 0.6.3 2026-03-07 01:21:10.584506 | orchestrator | 2026-03-07 01:21:09 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-07 01:21:10.584513 | orchestrator | 2026-03-07 01:21:09 | INFO  | Setting property image_build_date: 2024-09-26 2026-03-07 01:21:10.584519 | orchestrator | 2026-03-07 01:21:09 | INFO  | Checking status of 'Cirros 0.6.3' 2026-03-07 01:21:10.584525 | orchestrator | 2026-03-07 01:21:09 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-03-07 01:21:10.584532 | orchestrator | 2026-03-07 01:21:09 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-03-07 01:21:10.964395 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-03-07 01:21:13.730287 | orchestrator | 2026-03-07 01:21:13 | INFO  | date: 2026-03-06 2026-03-07 01:21:13.730418 | orchestrator | 2026-03-07 01:21:13 | INFO  | image: octavia-amphora-haproxy-2024.2.20260306.qcow2 2026-03-07 01:21:13.730463 | orchestrator | 2026-03-07 01:21:13 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260306.qcow2 2026-03-07 01:21:13.730478 | orchestrator | 2026-03-07 01:21:13 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260306.qcow2.CHECKSUM 2026-03-07 01:21:13.820741 | orchestrator | 2026-03-07 01:21:13 | INFO  | checksum: localhost | ok: "/var/lib/zuul/builds/c8bc494999dd46d891a476a01b0f8e08/work/logs" 2026-03-07 01:21:45.740650 | 
orchestrator -> localhost | changed: "/var/lib/zuul/builds/c8bc494999dd46d891a476a01b0f8e08/work/artifacts" 2026-03-07 01:21:46.014324 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c8bc494999dd46d891a476a01b0f8e08/work/docs" 2026-03-07 01:21:46.037861 | 2026-03-07 01:21:46.038693 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-03-07 01:21:46.967123 | orchestrator | changed: .d..t...... ./ 2026-03-07 01:21:46.967385 | orchestrator | changed: All items complete 2026-03-07 01:21:46.967421 | 2026-03-07 01:21:47.669668 | orchestrator | changed: .d..t...... ./ 2026-03-07 01:21:48.361384 | orchestrator | changed: .d..t...... ./ 2026-03-07 01:21:48.400888 | 2026-03-07 01:21:48.401063 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-03-07 01:21:48.453455 | orchestrator | skipping: Conditional result was False 2026-03-07 01:21:48.463782 | orchestrator | skipping: Conditional result was False 2026-03-07 01:21:48.476563 | 2026-03-07 01:21:48.476663 | PLAY RECAP 2026-03-07 01:21:48.476724 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-03-07 01:21:48.476755 | 2026-03-07 01:21:48.608954 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-07 01:21:48.609982 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-07 01:21:49.364099 | 2026-03-07 01:21:49.364270 | PLAY [Base post] 2026-03-07 01:21:49.378979 | 2026-03-07 01:21:49.379115 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-03-07 01:21:50.421232 | orchestrator | changed 2026-03-07 01:21:50.433729 | 2026-03-07 01:21:50.433919 | PLAY RECAP 2026-03-07 01:21:50.434035 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-03-07 01:21:50.434178 | 2026-03-07 01:21:50.596753 | POST-RUN END RESULT_NORMAL: [trusted : 
github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-07 01:21:50.598545 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-03-07 01:21:51.435699 | 2026-03-07 01:21:51.435874 | PLAY [Base post-logs] 2026-03-07 01:21:51.446657 | 2026-03-07 01:21:51.446788 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-03-07 01:21:51.907567 | localhost | changed 2026-03-07 01:21:51.917621 | 2026-03-07 01:21:51.917761 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-03-07 01:21:51.953499 | localhost | ok 2026-03-07 01:21:51.957768 | 2026-03-07 01:21:51.957890 | TASK [Set zuul-log-path fact] 2026-03-07 01:21:51.974908 | localhost | ok 2026-03-07 01:21:51.985715 | 2026-03-07 01:21:51.985836 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-03-07 01:21:52.012215 | localhost | ok 2026-03-07 01:21:52.017013 | 2026-03-07 01:21:52.017195 | TASK [upload-logs : Create log directories] 2026-03-07 01:21:52.522972 | localhost | changed 2026-03-07 01:21:52.526992 | 2026-03-07 01:21:52.527118 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-03-07 01:21:53.019839 | localhost -> localhost | ok: Runtime: 0:00:00.008348 2026-03-07 01:21:53.029313 | 2026-03-07 01:21:53.029535 | TASK [upload-logs : Upload logs to log server] 2026-03-07 01:21:53.598401 | localhost | Output suppressed because no_log was given 2026-03-07 01:21:53.603182 | 2026-03-07 01:21:53.603401 | LOOP [upload-logs : Compress console log and json output] 2026-03-07 01:21:53.664206 | localhost | skipping: Conditional result was False 2026-03-07 01:21:53.669736 | localhost | skipping: Conditional result was False 2026-03-07 01:21:53.677098 | 2026-03-07 01:21:53.677403 | LOOP [upload-logs : Upload compressed console log and json output] 2026-03-07 01:21:53.725513 | localhost | skipping: Conditional result was False 2026-03-07 01:21:53.726095 | 2026-03-07 01:21:53.729906 | localhost | skipping: Conditional 
result was False 2026-03-07 01:21:53.743677 | 2026-03-07 01:21:53.743931 | LOOP [upload-logs : Upload console log and json output]